1. 26

Let’s say you are building a simple command-line utility tool for software engineers with

  1. No GUI
  2. No goal of selling it
  3. Keeping it FOSS

What’s the best way to distribute it in 2023?

  1. With the ease of auto-updates
  2. Distributed as a compiled binary that can be installed with a single command

I am relatively agnostic to the language I would write in.

The options that I have looked at:

  1. brew is Mac only.
  2. apt is Debian only.
  3. docker requires Docker desktop to be running.
  4. pip leads to Python dependency hell.
  5. go install and cargo install require Go and Cargo to be installed, respectively.
  6. Distributing binaries does not give you auto-updates.
    1. 29

      One correction: Homebrew is not Mac-only, as it also supports Linux.

      However, to answer your question more directly: the goal should be to make your tool as easily packageable and portable as possible. That means no crazy build system and reasonable dependencies. I always use POSIX make with a Makefile and a config.mk file, plus a configure script that simply rewrites the variables in config.mk for a given target system (see here), but your mileage may vary. Use semantic versioning, especially for libraries.
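
      For a rough idea of what that buys a packager, the whole build interface stays at three commands (a sketch of the setup described above; the configure script here is plain shell, not autoconf):

      ./configure                      # rewrites the variables in config.mk for this system
      make                             # plain POSIX make, no exotic build tool required
      make install PREFIX=/usr/local   # assumes the Makefile honours PREFIX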

      For hosting: I would recommend setting up your own website with your own git or GitLab instance, where you mark versions as tags and publish tarballs, with hashes, of the repository state at a given tag. It is relatively cheap to do. As a backup, you can set up mirror repositories on GitHub and so forth, and it will work just fine, given that each tag will also be mirrored as a “release”.

      Over the years I have learnt that there is no way around working with package maintainers. Ask them what you can improve to make your tool easier to package, and actively send pull requests/patches to package your tool in popular distributions. Debian/Ubuntu is the most difficult to get into, and almost impossible without inside help, whereas other distributions make it very easy.

      I personally do not include version checks in my tools, but it can be done simply enough if you can live with the extra cruft in your binary (network code, parsing, etc.). In C, I would have a Makefile that contains a variable VERSION that is passed to the C preprocessor as -DVERSION=$(VERSION), which in turn can be used in the code to compare against the current online version.
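
      Flattened into a single compiler invocation, that trick looks roughly like this (file and version are illustrative):

      VERSION=1.2.0
      # the Makefile would pass this via CPPFLAGS; VERSION becomes a string macro
      cc -DVERSION="\"$VERSION\"" -o tool tool.c
      # tool.c can then do: printf("tool " VERSION "\n"); and compare it online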

      Regarding packaging again: I have never met an unkind package maintainer, and the work they do mostly goes unthanked. One e-mail straight-out asking for your tool to be included in the package sources often does wonders. :)

      The topic of Windows comes up here and there: I would target MinGW-w64 and not even start trying to port to MSVC. The advantage of MinGW-w64 is that you can use the usual POSIX tools like make and, compared to Cygwin, obtain true native binaries.
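
      For example, cross-compiling from Linux with the MinGW-w64 toolchain (the compiler name below is the standard x86_64 triplet; install the cross toolchain from your distribution):

      # produces a native Windows binary, no Cygwin DLL required
      x86_64-w64-mingw32-gcc -O2 -o tool.exe tool.c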

      1. 13

        As a package maintainer, this is the right answer. Windows generally has other needs, but for all the *nixes, you can go through the front door of package management and offload the work to others.

        That is, so long as you do pretty standard things to build. If you’re writing Go, make sure go get works in your repo. If you expect builds from a git checkout, make it possible to fake whatever values you are going to read from git. If you are writing something autotools-based, use autotools. If you are generating things with a custom compiler, at least bundle those artifacts in a tarball with the rest of your source, so maintainers don’t need to find the right version of that compiler (see https://github.com/hashicorp/vault/issues/7350 for an example of maintainers pleading with upstream to do this).

        Just do as package maintainers expect and you’ll be on every platform you might use, and several others besides.

        1. 3

          Agreed. To add to your answer: if you’re writing Go or Rust, including the sources of the modules/crates you use in your tarballs (“vendoring”) is extremely helpful.
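
          Both toolchains can do this in one command before you roll the tarball (run from the project root):

          go mod vendor    # Go: copies all module sources into ./vendor
          cargo vendor     # Rust: copies crate sources into ./vendor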

      2. 1

        2¢ from a Windows user: MinGW is a good place to start, but it can get annoying on the user end. If you have the time and are willing to learn a different platform, using the native tooling is the best way to go!

        Though you generally only run into this problem if you’re using a low-level language that cares about the compiler, etc. If you’re using something like Go it’ll be a non-issue, and Rust is pretty decent about using the proper tooling for builds, so the binaries you get are normal.

        As for packaging, Scoop is great, and it’s easy for folks to install if they don’t already have it; it’s essentially the Homebrew of Windows. Winget is fairly simple to use too and is becoming the sort of pre-installed standard, so if your tool may be used by newbies then Winget is probably the best way to go.

        Hope that helps!

        1. 2

          MinGW is a truly weird thing. It mixes *NIX and Windows ABIs in ways that cause a lot of impedance mismatches. I’ve never encountered a situation where it solved more problems than it created. If you’re writing in any language that’s higher-level than C (including C++), then your standard library likely has abstractions that paper over the differences between POSIX and Win32 more conveniently than MinGW does. If you are writing something against POSIX, then MinGW’s interpretation of POSIX is subtly different from any *NIX system in so many exciting ways that you may find it easier to write your own Win32/POSIX abstraction layer than to use it.

    2. 28

      Personally speaking, I don’t see the benefit of auto-updates here. I’d suggest distributing it as source, so that it can be packaged by third-parties into Linux distros, into the FreeBSD ports tree, etc.

      Then, updates can be handled by those downstreams as appropriate.

    3. 17

      As a user, I don’t want auto-updates. I’d rather the package manager did it or I did it manually. For most command line tools I’m happy to manually download binaries from the project website or github releases if my package manager doesn’t have an acceptable version.

      For getting into package repos, just use a language with a clear standard toolchain for producing binaries, use that toolchain, and politely email the repo maintainers. Rust and Go have obvious toolchains that produce static binaries. If you use unusual dependencies then it could still be a pain to get into package repos, because the maintainers will often want to package the libraries separately from your tool (which is a bit shit, because then you don’t get control over exactly which versions your dependencies are).
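
      For what it’s worth, producing those static binaries is a one-liner in both ecosystems (targets are examples; the musl target needs a rustup target add first):

      CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o tool .   # Go, fully static
      cargo build --release --target x86_64-unknown-linux-musl   # Rust, static via musl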

      Edit: if your tool requires regular updates to continue to work (e.g. it depends on some internet service whose API changes regularly) then a warning that I should update when I run it is fine if it has to connect to the internet anyway. Ideally, the tool shouldn’t talk to your server unless it has to.

      1. 6

        Just to be clear, I don’t want magical auto-updates either, and I don’t want to push auto-updates. What I meant was the ability for users to update via their package manager, e.g. brew upgrade.

        1. 5

          The question really boils down to a separation of concerns. In most non-Windows ecosystems (including Homebrew), author and packager are separate roles. The packager knows the details of the target platform and the packaging system, the author knows the details of the program. They might be the same person. More commonly, they are the same person for some platforms, but not others. As an author, you can make packagers happy by doing a few things:

          • Make it easy to mechanically discover dependencies. If your language ecosystem has a standard way of doing this, use it. Otherwise, at least provide a clear list of them. The packager will typically want to use packaged versions, or at least register dependencies for auditing.
          • Don’t require an internet connection during the build. Secure package-build infrastructure is sandboxed.
          • Use a build system that other people use. If you use CMake, for example, it’s one line for me to tell the FreeBSD ports system how to build, and it will build with Ninja by default and so scale well on the build cluster. If you write custom scripts, it’s more work.
          • Use clean platform abstractions. If someone wants to support a different OS, they shouldn’t need to go and find everywhere where you’ve written ‘if Linux’ and then figure out that what you meant was ‘if not MacOS’.
          • Put as many platforms and architectures in CI as you can. Even if you don’t test on the ones I care about, testing on more means it’s more likely to work for me.
          • Provide clear build instructions. Don’t make me guess what some environment variable needs to be.
          • If you autodetect dependencies, make it possible to opt out and specify them explicitly. When packaging, you want to be clear about everything that’s used.
        2. 1

          Makes sense. One cross-platform option that’s a bit of a hack is to distribute your binaries on npm: https://blog.xendit.engineer/how-we-repurposed-npm-to-publish-and-distribute-our-go-binaries-for-internal-cli-23981b80911b

          It’s very convenient if your audience is likely to have npm or yarn or whatever installed.

    4. 8

      If you do not need to support Win32, you could package it as a Nix flake. Nix is a build-from-source package manager, but anything upstreamed to nixpkgs will be built once by hydra.nixos.org and then anyone who needs the exact same version of the package will be able to download the build artifacts rather than build them locally.

      Another option would be to add Nix flake configs to the tool’s repo, which would enable users to try the command without installing it via nix run git+https://your.git.host/ashishb/your-app-repo#app-name (which you could wrap in a script in /usr/local/bin for ease of use). Nix then automatically downloads the app and runs it. The user automatically gets the latest version unless they pin it.
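
      Wrapped in a script, that zero-install invocation might look like this (URL and app name are the placeholders from the example above):

      #!/bin/sh
      # /usr/local/bin/app-name: thin wrapper; Nix fetches and caches the flake
      exec nix run 'git+https://your.git.host/ashishb/your-app-repo#app-name' -- "$@"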

      Caveats:

      1. Your users are much more likely to have brew, apt, or docker installed than Nix. That said, Nix is not too hard to install and doesn’t interfere with the system when you’re not using it, so it’s not a huge ask.
      2. Using nix-env may require users to add Nix paths to their PATH, so it’s not a single-command install per se.
      3. With a nix flake outside nixpkgs, users probably won’t benefit from the binary cache, so they’ll end up building your app from source locally. You could probably work around this, but not in a way that makes it still single-command.
      1. 1

        If you do not need to support Win32, you could package it as a Nix flake.

        This is pretty much what I do for my own projects. With nix run github:srid/emanote, for instance, you could run Emanote directly using Nix (it downloads and builds it in the local Nix store, without mutating your user profile).

        Using nix-env may require users to add Nix paths to their PATH, so it’s not a single-command install per se.

        Note: the flake version of that is nix profile install. But honestly, I wouldn’t recommend that people install it imperatively; instead, use something like home-manager. I have a template in the works for that.

        With a nix flake outside nixpkgs, users probably won’t benefit from the binary cache, so they’ll end up building your app from source locally. You could probably work around this, but not in a way that makes it still single-command.

        You can add the cache to nixConfig of flake.nix.

        1. 1

          Aha! Enabling third-party substituters seemed like an obvious feature. Thanks for pointing to the configs for it.

      2. 1

        I also tend to just bundle nix expressions or flakes for my projects. It’s the format I need myself, and it should suffice for anyone packaging in another format to understand more or less what they’ll need.

        As others observe, no auto-update (but I see this as a bit of a misfeature anyways and only ~tolerate it for a few GUI apps…)

        1. 2

          It sounds like OP doesn’t want auto-update per se, from another comment, but rather for the tool to be updateable from the package manager. I think Nix would cover whatever interpretation of their use case they ultimately have, as long as it isn’t self-updating.

    5. 7

      I’ve been exploring this problem with my https://datasette.io/ project for the last five years now. Here’s everything I’ve tried so far:

      Those were my first attempts. More recently I’ve tried some more sophisticated approaches:

      The only avenue I haven’t fully explored is creating a compiled standalone executable that bundles Python under the hood. https://github.com/indygreg/PyOxidizer is a leading contender for that, should I decide to go in that direction.

      I’m not 100% happy with any of these solutions. This is such a difficult and persistent problem!

    6. 6

      I’d suggest that auto-update is asking too much: have functionality to check whether updates are available, but leave the sysadmin work to the users and the ecosystem managers. If you do that, then you can just throw up a pile of binaries under GitHub releases or whatever.

      1. 2

        So is it OK for the binary to phone home regularly to check for updates? I thought most people would be uncomfortable with that.

        1. 4

          Oh, no no no, not unless enabled explicitly. I mean, have a --check-updates argument or whatever so I as a user can script it or check myself.

        2. 2

          Fortunately, most distros will remove or disable your phone-home code.

          1. 4

            I’m not planning to phone home. I want to give users an option to easily upgrade e.g. brew upgrade or pip upgrade.

        3. 1

          i do this for some internal work tooling… but it doesn’t “phone home”, it just hits the API for wherever my binaries are uploaded…

          if you have releases in github, take the example of casey’s excellent just:

          ❯ curl -L -s https://api.github.com/repos/casey/just/releases\?page\=1\&per_page\=1 | jq '.[0].name' -r
          1.13.0
          

          in my tool i take this output and compare it to the tool’s current version and output a message if it’s older (“hey, a new update is available”)

          of course i fail gracefully and quickly if connectivity isn’t there (short timeout)
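
          a sketch of that check in shell, against the same endpoint ($current would be the compiled-in version; repo name as in the example above):

          latest=$(curl -fsSL --max-time 2 \
              'https://api.github.com/repos/casey/just/releases?page=1&per_page=1' \
              | jq -r '.[0].name')
          # -f and --max-time make this fail fast and quietly when offline
          if [ -n "$latest" ] && [ "$latest" != "$current" ]; then
              echo "hey, a new update is available: $latest"
          fi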

          i wouldn’t call that “phoning home”

          1. 2

            i wouldn’t call that “phoning home”

            Users would: it’s still giving you an idea of how many people are using it and from where (source IP address), and you could start shipping extra information in the URL if and when you please. But if it’s just for work, who cares.

            1. 1

              you certainly CAN, but this is going to GitHub’s API, so the only one collecting data is possibly GitHub, and there is a distinction there for a user, I think… this is very different from running actual telemetry… which is useful in its own right

    7. 5

      Have you looked at Flatpak? https://flathub.org/ for example. You can also host your own, but flathub is the de facto standard.

      1. 4

        Flatpak is a portable packaging system that works well on Linux, Linux, and Linux (but not Linux).

      2. 2

        Flatpak is nice, but it is oriented towards GUIs, and the available runtimes reflect that. Perhaps you know of a smaller runtime without GUI dependencies?

        Flatpak also requires you (as far as I know) to use the flatpak command to run the app, so it’s probably not what OP is looking for, unfortunately.

        1. 3

          Ah yes, for command line programs this might not be the best choice. I answered too quickly, my apologies.

        2. 1

          Hmmm, I like Flatpak in principle, but not having a “CLI” runtime does seem kinda odd. A nice way for an application to request a shortcut or symlink in a known location (~/.local/bin/ perhaps?) would also help.

          1. 1

            Flatpak does install command line wrappers, but the directory is not added to $PATH by default and the executable names can only be the RDNN package name. I can type org.inkscape.Inkscape to use Inkscape’s command-line editing features, for example.
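
            For instance (the export path below is the default for system-wide installs; user installs live under ~/.local/share/flatpak/exports/bin):

            export PATH="$PATH:/var/lib/flatpak/exports/bin"
            org.inkscape.Inkscape --version   # the RDNN wrapper is now on $PATH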

            1. 1

              Thanks. Makes perfect sense, but is also really annoying.

      3. 1

        See https://github.com/flatpak/flatpak/issues/1188: Flatpak for distribution and updates, but maybe with a short list of commands (and a copy-and-paste script for running it) to install.

    8. 5

      I always prefer to install utilities, whether GUI, TUI, or CLI, from my package manager. Only as a fallback do I use statically linked binaries, such as those produced with Go or Rust. I will not use Snap, Flatpak, AppImage, Docker, or any of the rest. I only use pip in a virtual env when creating Python tools. I think a Google Play for Linux (Snap/Flatpak) should be very aggressively avoided.

      There are established mechanisms for software distribution on Linux; they have worked reliably for decades, and I don’t believe there’s any reason to switch.

      FRIGN’s advice is also fantastic.

    9. 4

      With regard to option 2, getting your package into the repos of a few major distributions does give you roughly this (modulo the limitations of each distro). When I’m not developing something tied to some language or environment’s package manager, I (at least notionally) try to target Gentoo, Debian, Ubuntu, and Arch Linux (though if you get into Debian properly, your package will eventually be imported into Ubuntu anyway). But you might add one each of the Mac and Windows package management tools as well.

      Edit: doing this (and specifically structuring your program so that it is easily packageable), and providing .debs for direct download, means that maintainers for other distros will tend to be pretty willing to do the “last mile” packaging, getting you fairly complete coverage.

      1. 3

        Let’s assume I have a way of generating arch-specific static binaries.

        Is there any easy way to automate publishing the packages to:

        1. Homebrew
        2. Ubuntu apt-get
        3. Chocolatey (Windows)
        1. 2

          cargo-dist might be able to do that in the future.

        2. 1

          So, usually you would work out how to package the project for all of these as part of your CI (rather than “just generating a static binary”), and then, on a tagged release, automatically push the generated artifact to the relevant package manager. E.g., I’ve seen people use GitHub Actions to wrap the static binary in the wrapper that Chocolatey needs and then push it to Chocolatey.

          But the exact ‘how’ depends on the details of your whole setup. E.g., packaging Rust things for Debian is actually a lot easier than that: you typically wouldn’t compile a static binary; you only need a debian directory in the root of your Rust project, with a correctly formatted copyright file and a debcargo.toml file, which are processed, compiled, and distributed automatically by the Debian infrastructure once you have registered your package with the Debian Rust packaging team. It’s similar for Gentoo, except you need a gentoo directory with an ebuild file, and distributing binary packages requires a bit more setup on your end instead of being completely automatic on the distro infrastructure’s end.

          Basically, you do need to learn a bit about ‘the maintainer life’ across the major platforms you want to release on, but the upside is that you get those nice ‘native OS ergonomics’.

    10. 4

      If you can drop the auto-updates and just distribute a self-contained executable, then Cosmopolitan might be an interesting option.

    11. 2

      I think the auto-update will be the hard part. Absent auto-update, shipping binaries written in Go or Zig is pretty easy (since they both have stellar cross-compilation support). You can also package up Python using zipapp, so that all the user needs to have pre-installed is Python, and you ship your app self-contained with all its dependencies.
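
      A zipapp build is only a couple of commands (module and entry-point names here are hypothetical):

      pip install -r requirements.txt --target build/    # bundle the dependencies
      cp -r mytool/ build/                               # add your own package
      python -m zipapp build/ -m 'mytool:main' -p '/usr/bin/env python3' -o mytool.pyz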

      Auto-update is hard because each OS will do its own thing for package management. You could distribute through a third-party package manager, though. Both nixpkgs and Homebrew can be installed onto an existing Linux or macOS install.

      yt-dlp has an --update flag which will cause the binary to check for updates and try to update itself, but you need to be careful with mechanisms like that to avoid security issues.

    12. 2
      1. Let people update it with their regular packages, so they never get broken by a bug fix on the day of a big demo.
      2. Regular package managers have solved this. The package-manager infrastructure gets Go or Cargo installed so the end user doesn’t need to, and the end user gets a single command they are used to using that gets them your tool.

      Apt and brew can both be fed by the same tarball. You are distributing open source, so distribute source! This comment is really good: https://lobste.rs/s/wk9qye/best_way_distribute_foss_tool_2023#c_gdpsei And if you use an expected build mechanism, even for Python packages, then you might end up packaged on a dozen different platforms you’ve never even thought of.

    13. 2

      Compile to a binary or a single script and provide instructions to download it and copy it to /usr/bin or an equivalent location. Still the cleanest, IMO.
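
      i.e. the classic two-liner in the README (URL and file names hypothetical):

      sudo curl -fsSL -o /usr/local/bin/tool https://example.com/releases/tool-linux-amd64
      sudo chmod +x /usr/local/bin/tool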

    14. 2

      Nobody uses package managers on Windows, and they’re a last resort on macOS, so you are gonna have to roll your own auto-update, unfortunately.

      On the Linux side, tbh, pretty much the only software I’ve interacted with that “just works” and isn’t a nightmare of broken shit has been software that entirely sidesteps the traditional Linux ecosystem: software that is either written in Go or distributed as containers. With Go, you go to the project’s GitHub, download a 100% statically linked binary, and it works 100% of the time, forever. Containers are less reliable because you have a gigantic runtime, but you can at least find statically linked versions of podman etc.

      Beyond that you’ll have to accept that your software will not work a significant % of the time for your users for reasons that are entirely out of your control.

      1. 4

        they’re a last resort on macOS

        What now? brew install xyz is the first command I run on my Mac when I try to install xyz. Only if that command fails do I google (DDG, in fact) the installation instructions.

      2. 4

        They’re a last resort on macOS

        While this is likely true for the general macOS audience, I kind of think that the overlap between “people who use CLI tools regularly” and “people who have Homebrew installed” is basically 100% minus whoever uses MacPorts :-).

      3. 2

        Nobody uses package managers on Windows

        That’s false. Both choco and winget are decent and see decent usage among advanced users.

      4. 1

        Nobody uses package managers on Windows

        The existing package managers on Windows haven’t been widely used, but I feel like now that there is an official one, winget, it’s going to see much wider use.

      5. 1

        I’ve seen growing use of Scoop and winget in recent years. I think folks are realising that Windows is still going to be the most widely used OS, and MS keeps making it a nicer environment to dev in!

        Brew taps seem to work well for CLI tools; I’m not sure I’m familiar with the sentiment that it’s a “last resort”, but that might just be my bubble! Out of curiosity, are there alternatives?

    15. 2

      For musicians who want to distribute their songs on multiple streaming platforms, there exist services which do exactly that: provide a single, unified interface and take care of publishing on the different services, adapting to their different rules behind the scenes. I was wondering if a similar service exists for FOSS.

    16. 2

      Have a source repository, build a package for the distribution that you’re using, and allow other distro communities to create similar packages if they’re interested (which, in truth, they probably won’t be).

      I have a couple of small C applications like that and, outside of a couple of enthusiasts from Fedora, nobody was interested in packaging them, and that was fine. I have them for myself where I need them, and that’s enough for me.

    17. 2

      Create a repository, maybe a GitHub mirror.

      Make your software easy to build and deploy.

      Use tags or some other standard way to mark releases, with sane version numbers.

      Then making packages, installers, and even auto-updates on Windows (if needed) all become easier, both for you and for other people.

      Help porters when you can, even on obscure platforms. That usually has a lot of positive side effects, even when they are not initially obvious.

      Still make binaries available, even when you don’t consider it the best way to install your software.

      Though I haven’t used it to build software myself, OBS could be handy if you want to create packages for many Linux distributions yourself. I mention it to give slower-moving distributions an option that isn’t another package manager (like Snap or Flatpak).

      https://openbuildservice.org/

      Some projects have guides for maintainers, but that’s only needed if it’s non-obvious how to best build a package, or if things differ from the usual way in similar projects.

    18. 2

      Guix

    19. 1

      Without knowing more about the requirements, Go is my default choice for CLI tooling: self-contained executables for binary distribution, easy and fast builds for source distribution. First-class support for cross-compilation can be a major plus too.

      I am happy to see Go source code as a user, too. I know that it will be straightforward and quick to build. I am not a fan of magical self-updates, and if that were the only option I’d probably pass on the tool. Easy-to-build source, please. But I guess this greatly depends on your target audience.

    20. 1

      A docker-format container does not (always) require Docker Desktop to be running. On a Linux host, you can use Podman, which is daemonless and supports user-invoked (unprivileged) containers too. Yes, via Docker Desktop you get Windows and Mac coverage. But container UX is clunky for CLI apps, IMHO.

      1. 1

        Podman is better, but it is still less popular than Docker Desktop. Further, Podman requires a VM engine to be installed, AFAIK.

        1. 1

          Podman on Linux does not require a VM engine. It might on macOS (just like Docker Desktop does).

    21. 1

      If you are willing to give up on the auto-update part, https://asdf-vm.com is worth considering. It only requires bash and basic Unix tools like curl and git.
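
      Installation through asdf then looks something like this (plugin name and repo are hypothetical; the plugin is a small shell repo you would publish):

      asdf plugin add yourtool https://github.com/you/asdf-yourtool.git
      asdf install yourtool latest
      asdf global yourtool latest   # puts a shim for the tool on $PATH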

    22. 1

      It entirely depends on what the FOSS tool does and who your target users are. Personally, I prefer Docker containers, but I run Linux on my laptop. What I want from the distribution method: easy installation, easy upgrades, and easy, clean uninstall. Docker solves all of those. A single binary is OK too, although not as convenient.