1. 6

    Lots of companies pay for software. There’s a whole giant industry selling commercial software… which leads to the question of why not make it proprietary.

    1. What sort of product is it?
    2. Who would benefit from availability of source?
    3. Who would benefit from it being open source (you can give people source code under a proprietary license, so this is a different question than the previous one)?

    (I’m working on a commercial product, with an open source variant with a slightly different use case as marketing, and … a bunch of people use the open source tool, and I’ve only gotten a single patch, ever. It’s not clear what being open source does for anyone in this particular example.)

    1. 1

      You have a good point, so let me answer your questions:

      1. It is a tool meant for developers: a build system.
      2. Everyone; it is actually crucial to the software supply chain that the source is available. If the build system is not Open Source (i.e., you can’t compile it yourself), you don’t know if it has been backdoored with a Trusting Trust attack, just like a compiler.
      3. End users. If it’s only source-available, then companies that distribute software that builds with it could conceivably make it really hard to build their software, even if that software is FOSS or source-available.

      But beyond the fact that it is actually crucial to be FOSS for security, there is another big reason: developers will not adopt a non-FOSS tool. If it is FOSS, it has a chance, and if it is not, then it has none.

      1. 4

        There are many build tools out there that are very successful and not open source. TeamCity is a good example.

        1. 3

          But beyond the fact that it is actually crucial to be FOSS for security, there is another big reason: developers will not adopt a non-FOSS tool. If it is FOSS, it has a chance, and if it is not, then it has none.

          Open source isn’t a requirement for commercially successful build tools; Incredibuild is a proprietary build system used by Adobe, Amazon, Boeing, Epic Megagames, Intel, Microsoft, and many other companies. Most of the market consists of pragmatists; they’ll adopt a new product if it addresses a major pain point.

          Is there a distributed build tool for Rust yet? That may be a market worth pursuing.

          1. 1

            I did not expect anyone to say that closed-source build systems were used, but you and a sibling named two.

            As far as making a distributed build tool for Rust, yeah, I can do that. Thank you.

          2. 1

            It is a tool meant for developers: a build system.

            I am curious how you are planning to legally structure dual-licensing of a build system. I believe most (all?) examples of dual-licensing where one license is free/open source involve a copyleft license (commonly GPL). In order to trigger the copyleft, the user must produce a derivative work of your software (e.g., link to your library). I don’t see how using a build system to build a project results in a derivative work. I suppose there are probably some dual-licensed projects based on AGPL but that doesn’t seem to fit the build system either.

            I also broadly agree with what others have said about your primary concern (that the companies will steal rather than pay): companies (at least in the western economies) are happy to pay provided prices are reasonable and metrics are sensible (e.g., many would be reluctant to jump through licensing server installation, etc). But companies, especially large ones, are also often conservative/dysfunctional, so expect quite a bit of admin overhead (see @kornel’s comment). For the level of revenue you are looking at (say, ~$300K/year), I would say you will need to hire an admin person unless you are prepared to spend a substantial chunk of your own time doing that.

            This is based on my experience running a software company (codesynthesis.com ) with a bunch of dual-licensed products. Ironically, quite a bit of its revenue is currently used to fund the development of a build system (build2; permissively-licensed under MIT). If you are looking to build a general-purpose build system, plan for a many-year effort (again, talking from experience). Good luck!

            1. 1

              I am curious how are you planning to legally structure dual-licensing of a build system.

              It will also be a library.

              There are plenty of places in programming where it is necessary to be able to generate tasks, order those tasks to make sure all dependencies are fulfilled, and run those tasks (hopefully as fast as possible).

              One such example is an init/supervision system. There are services that need to be started after certain others.

              (Sidenote: I’m also working on an init/supervision system, so technically, companies don’t need to make their own with my library. It’s just an example.)
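
              (For the ordering part: it boils down to a topological sort of the task graph. Coreutils even ships a toy version of the idea; the task names here are made up:)

                # each input pair means "left must finish before right";
                # tsort prints one valid execution order
                printf '%s\n' 'parse compile' 'compile link' 'link package' | tsort
                # prints: parse, compile, link, package (one per line)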

              I suppose there are probably some dual-licensed projects based on AGPL but that doesn’t seem to fit the build system either.

              This build system will be distributable, like Bazel, so yes, that does apply.

              I also broadly agree with what others have said about your primary concern (that the companies will steal rather than pay): companies (at least in the western economies) are happy to pay provided prices are reasonable and metrics are sensible (e.g., many would be reluctant to jump though licensing server installation, etc).

              What are reasonable prices, though?

              But companies, especially large ones, are also often conservative/dysfunctional so expect quite a bit of admin overhead (see @kornel comment). For the level of revenue you are looking at (say, ~$300K/year), I would say you will need to hire an admin person unless you are prepared to spend a substantial chunk of your own time doing that.

              I am going to do it, yes, but I’m also going to be helped by my wife.

              This is based on my experience running a software company (codesynthesis.com ) with a bunch of dual-licensed products. Ironically, quite a bit of its revenue is currently used to fund the development of a build system (build2; permissively-licensed under MIT). If you are looking to build a general-purpose build system, plan for a many-year effort (again, talking from experience). Good luck!

              Oh, I’m cutting features out of my build system, so I don’t expect it to take that long. Also, I’m not running a business like you are.

              Thank you.

              1. 2

                What are reasonable prices, though?

                The video Designing the Ideal Bootstrapped Business has some excellent advice on pricing; the author has sold at least 3 startups.

        1. 3

          I hope that other tools beyond Fil and Fil4prod start adopting these improvements [icicle mode, showing more text, better colors, line numbers], and look forward to seeing what further improvements we can find to these visualizations.

          I’ve been using Speedscope for a while, and I think it has had all of these features for a while. Instead of right-aligning text but still truncating it, it just allows you to zoom in. It doesn’t include source code, but for me function names are actually more readable.

          1. 1

            It doesn’t do the color-saturation-matches-width, though, which I think is the most important change.

            1. 1

              Isn’t the saturation=width a bit misleading though? If you have a loop which calls a short foo() thousands of times, each one will be very pale. But if you choose random colour per-function while keeping saturation, that allows spotting common patterns.

              They seem to work well in different scenarios rather than one being better than the other.

              1. 2

                No, in a flamegraph the X axis isn’t time. So if you have lots of calls to a single function, they will end up merged into one single (long) frame if you’re profiling performance. (I will try to update the page tomorrow to make that explicit.)

                Update: Or rather, the X axis width is a percentage (of memory allocations, or time if this is performance), but the order is not meaningful. So identical callstacks get combined into a longer frame.
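
                (One way to see why: in the common collapsed-stack input format, one line is one call stack, so merging identical stacks is just counting duplicate lines. A sketch with made-up stacks:)

                  # three samples; the two identical stacks merge into a single frame
                  # whose width is proportional to its count
                  printf '%s\n' 'main;loop;foo' 'main;loop;foo' 'main;io;read' | sort | uniq -c
                  #   1 main;io;read
                  #   2 main;loop;foo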

                1. 1

                  In this specific flamegraph it isn’t, but you can generate an over-time samples based flamegraph too. Like Ruby stackprof does in raw mode for example.

                  But even here the difference still holds for cases with auto-generated structures - think large parser / visitor pattern with small things hanging off the different types of nodes.

                  1. 2

                    There are reverse flamegraphs that start from the opposite side of the stack, for places like recursion where you have a repeating pattern at the bottom of the stack but the top differs. Fil generates both in its reports.

                    In any case: the point is not so much that the default coloring scheme for flamegraphs is useless, it’s just that I believe it’s a bad default.

            2. 1

              Interesting. This looks very much like Catapult: https://chromium.googlesource.com/catapult/+/HEAD/tracing/README.md

            1. 10

              I’m excited about the multiple profile support; previously you could only have release and dev, now you can have variations.

              For example, I’m shipping two versions of my code, one with debug assertions disabled and one with those enabled; the idea is that if there’s a problem, people can rerun with the version that will catch problems earlier. Currently the version with debug assertions enabled is a full-on dev profile, more tuned for development than for running a potentially long-running process… but ideally it’d be a tweaked version of the production profile.
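
              With the new custom profiles it should be expressible as something like this (a sketch; the profile name is made up):

                # add a profile to Cargo.toml that inherits release settings
                # but turns debug assertions back on
                cat >> Cargo.toml <<'EOF'
                [profile.release-checked]
                inherits = "release"
                debug-assertions = true
                EOF
                cargo build --profile release-checked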

              1. 2

                I think a “small code” profile of some sort would be useful for many projects.

                1. 1

                  You could do that already with the optimization level. But previously you couldn’t easily build e.g. three variants (performance, small size, debug/test), you’d have to manually edit a config file or twiddle things to switch back and forth.

                  1. 3

                    Exactly, this is what I meant. Most projects will want the normal “release” build, but I can see it being helpful to also have the “size” build profile.

              1. 1

                Correct me if I’m wrong, but Mamba appears to require Conda during its installation process. Why is that the case, given that Mamba is a re-implementation?

                By the way, Mamba’s integration with fish seems second-class, and it’s annoying that I must use conda activate foo instead of mamba activate foo.

                1. 4

                  Mamba is a work-in-progress reimplementation. So it falls back to Conda for parts that haven’t been redone.

                  There’s a just-Mamba version called micromamba that is a self-contained single-file executable, but e.g. it doesn’t yet support pip package installs from environment.yml and is mostly just intended for bootstrapping environments in CI or Docker, not as a day-to-day tool for development.

                1. 1

                  One example is cryptography: by moving to a Rust implementation, the build time is now forever and it takes gigabytes of disk.

                  1. 7

                    Like basically any package that involves compiled extensions, they provide pre-built binary .whl packages, so pip install cryptography should not involve having to compile anything on the target machine. Any extra time/space required by the compilation is thus incurred only by the people who build the packages.

                    1. 2

                      Unfortunately there are no ARM packages, so having to install anything that uses cryptography on a Raspberry Pi takes… well, I don’t know how long it takes, I only started it two months ago.

                      1. 3

                        There are aarch64 packages on PyPI: https://pypi.org/project/cryptography/#files

                        One thing to try is upgrading pip before you install cryptography, old versions of pip won’t know about manylinux2014 for example.
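
                        i.e., something like:

                          # old pip may not know newer wheel tags and will build from source instead
                          pip install --upgrade pip
                          pip install cryptography   # should now find a pre-built aarch64 wheel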

                        1. 1

                          Ahh, I will try that next time, thank you! It might just have been that.

                      2. 1

                        That’s exactly what I did to upgrade the py3-cryptography in alpine.

                        1. 1

                          For quite some time all Python .whl packages with compiled extensions were compiled with glibc (due to being the easiest way to define a lowest-common-denominator “Linux” compilation target), which meant they were not compatible with musl-based distros. There is now a platform tag on PyPI for packages built against musl, and the cryptography project now provides .whl pre-compiled packages using musl (as well as ones for glibc).

                          1. 1

                            That would be nice, however that also requires package maintainers to rebuild the whls. That feature was only added in September for cibw, so it might take a while to get fully adopted.

                    1. 2

                      Every time I’ve wanted to extend python with more than a function or two, swig has been the right answer for me. But I’ve always had a C or C++ codebase that I wanted to use in the extension.

                      For C or C++, it eliminates a lot of the verbose boilerplate the article complains about.

                      1. 2

                        Ah yes, I’ll mention that; it also does other languages, I think? I would still expect pybind11 to be a lot nicer for C++ though, based on previous experience with boost::python.

                        1. 1

                          It does. We’ve used it before to generate interfaces for the same library for C#, Python, and Java. Last time I tried boost::python it was good, but you kind of had to be all-in on boost::build, and (not python-related) we weren’t. If I had to do it again and only cared about targeting Python, I’d definitely look at it again now that there’s a CMake build system for Boost.

                          1. 2

                            Ah, so, pybind11 is like boost::python without going all in on Boost, is my understanding.

                            Think of this library as a tiny self-contained version of Boost.Python with everything stripped away that isn’t relevant for binding generation.

                            1. 1

                              Reading those docs, pybind11 looks really nice. I think if I had a library where my swig interface file started to get non-trivial, if I knew I just needed python I’d try pybind11 before writing a large swig interface.

                              (One of the cool things about swig is that you often don’t have to do more than write a couple marshal functions, then just mark up your C or C++ headers and it works without much fuss. Occasionally, that doesn’t age well and you wind up writing a lot of swig interface.)

                        2. 2

                          For anyone wondering about production uses of swig, Google uses it to generate Python bindings for internal libraries.

                          1. 1

                            SWIG left a bad taste in my mouth in 2004. I was using some Python bindings for a few fairly obscure Windows APIs, specifically Microsoft Active Accessibility, low-level keyboard hooks, and the Microsoft Speech API. On the one hand, I’m indebted to the developers of these bindings for helping me get started writing a Windows screen reader when I was still pretty new to Windows programming. On the other hand, my dim recollection is that the Active Accessibility bindings in particular were crash-prone. I dug into that code at the time, but I don’t really remember what I found. It’s possible that those bindings were stretching SWIG beyond what it could really handle; after all, these bindings were for a COM-based API. But in any case, after that, I wanted to stay away from SWIG as much as I could.

                          1. 4

                            Oh boy, looking forward to airlines getting sued.

                            1. 4

                              Now that I think about it, Halakhic Judaism as a domain of business logic has some similarities to airline fare rules: extremely complex, defined by example, and everyone does it slightly differently.

                              1. 2

                                Do they charge you an extra $50 for checked baggage if you weren’t Jewish before you got to the promised land?

                              1. 6

                                It seems to me that if one is going to go that far off the beaten path (i.e. not just running “docker build”), then it would also be worth looking into Buildah, a flexible image build tool from the same group as Podman. Have you looked into Buildah yet? I haven’t yet used it in anger, but it looks interesting.

                                1. 6

                                  +1000 for Buildah.

                                  No more dind crap in your CI.

                                  Lets you export your image in OCI format for, among other useful purposes, security scanning before pushing, etc.

                                  Overall much better than Docker’s build. Highly recommend you try it.

                                  1. 3

                                    Added looking into it to my todo list, thanks for the suggestion @mwcampbell and @ricardbejarano.

                                    1. 2

                                      I’m intrigued: what do you use for security scanning the image?

                                      1. 4

                                        My (GitLab) CI for building container images is as follows:

                                        • Stage 1: lint Dockerfile with Hadolint.
                                        • Stage 2: perform static Dockerfile analysis with Trivy (in config mode) and TerraScan.
                                        • Stage 3: build with Buildah, export to a directory in the OCI format (buildah push myimage oci:./build, last time I checked, you can’t do this with the Docker CLI), pass that as an artifact for the following stages.
                                        • Stage 4a: look for known vulns within the contents of the image using Trivy (this time in image mode) and Grype.
                                        • Stage 4b: I also use Syft to generate the list of software in the image, along with their version numbers. This has been useful more times than I can remember, for filing bug reports, comparing a working and a broken image, etc.
                                        • Stage 5: if all the above passed, grab the image back into Buildah (buildah pull oci:./build, can’t do this with Docker’s CLI either) and push it to a couple of registries.
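
                                        Condensed into the underlying commands, it looks roughly like this (from memory, so treat the exact invocations as a sketch; image and registry names are placeholders):

                                          hadolint Dockerfile                   # stage 1: lint
                                          trivy config . && terrascan scan      # stage 2: static analysis
                                          buildah bud -t myimage .              # stage 3: build...
                                          buildah push myimage oci:./build      # ...and export an OCI layout
                                          grype oci-dir:./build                 # stage 4a: known-vuln scan (Trivy too)
                                          syft oci-dir:./build                  # stage 4b: package inventory
                                          buildah pull oci:./build              # stage 5: re-import...
                                          buildah push myimage docker://registry.example.com/myimage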

                                        The tools in stage 2 pick up most of the “security bad practices”. The tools in stage 4 give me the list of known vulnerabilities in the image’s contents, along with their CVE, severity, and whether there’s a fix in a newer release or not.

                                        Having two tools in both stages is useful because it increases coverage, as some tools pick up vulns that others don’t.

                                        Scanning before pushing lets me decide whether I want the new, surely vulnerable image over the old (which may or may not be vulnerable as well). I only perform this manual intervention on severities high and critical, though.

                                        1. 1

                                          Thanks for the response. What are your thoughts on https://github.com/quay/clair which seems to replace both Grype and Trivy?

                                          1. 1

                                            I haven’t used it, can’t judge.

                                            Thanks for showing it to me.

                                      2. 1

                                        I’ve never used dind, but have only used Jenkins and GitHub Actions. Is that a common thing?

                                        1. 1

                                          IIRC GitHub Actions already has a Docker daemon accessible from within the CI container. So you’re already using Docker in Whatever on your builds.

                                          There are many problems with running the Docker daemon within the build container, and IMO it’s not “correct”.

                                          A container image is just a filesystem bundle. There’s no reason you need a daemon for building one.

                                      3. 4

                                        I have not looked at it, but my understanding is that Podman’s podman build is a wrapper around Buildah. So as a first pass I assume podman build has similar features. It does actually have at least one feature that docker build doesn’t, namely volume mounts during builds.
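
                                        For example (paths made up):

                                          # bind-mount a host cache into the build; docker build has no -v equivalent
                                          podman build -v "$HOME/.cache/pip:/root/.cache/pip" -t myimage .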

                                        1. 2

                                          If I remember correctly, the Buildah documents specify that while yes - podman build is basically a wrapper around Buildah - it doesn’t expose the full functionality of Buildah, trying to be more of a simple wrapper for people coming from Docker. I can’t recall what specific functionality was hidden from the user, but it was listed in the docs.

                                      1. 5

                                        Followed the suggestions, cut end-to-end time (when the cache works) from 20 minutes to 11 minutes.

                                        1. 10

                                          Q: Why choose Docker or Podman over Nix or Guix?

                                          Edit with some rephrasing: why run containers over a binary cache? They can both do somewhat similar things in creating a reproducible build (so long as you aren’t apt upgrade-ing in your container’s config file) and laying out how to glue your different services together, but is there a massive advantage of one over the other?

                                          1. 28

                                            I can’t speak for the OP, but for myself there are three reasons:

                                            1. Docker for Mac is just so damn easy. I don’t have to think about a VM or anything else. It Just Works. I know Nix works natively on Mac (I’ve never tried Guix), but while I do development on a Mac, I’m almost always targeting Linux, so that’s the platform that matters.

                                            2. The consumers of my images don’t use Nix or Guix, they use Docker. I use Docker for CI (GitHub Actions) and to ship software. In both cases, Docker requires no additional effort on my part or on the part of my users. In some cases I literally can’t use Nix. For example, if I need to run something on a cluster controlled by another organization there is literally no chance they’re going to install Nix for me, but they already have Docker (or Podman) available.

                                            3. This is minor, I’m sure I could get over it, but I’ve written a Nix config before and I found the language completely inscrutable. The Dockerfile “language”, while technically inferior, is incredibly simple and leverages shell commands I already know.

                                            1. 15

                                              I am not a nix fan, quite the opposite, I hate it with a passion, but I will point out that you can generate OCI images (docker/podman) from nix. Basically you can use it as a Dockerfile replacement. So you don’t need nix deployed in production, although you do need it for development.
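
                                              The workflow is roughly this (a sketch; image.nix is assumed to call pkgs.dockerTools.buildImage or buildLayeredImage):

                                                # nix produces a Docker-loadable tarball; no Dockerfile or daemon
                                                # is involved in the build itself
                                                nix-build image.nix -o result
                                                docker load < result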

                                              1. 8

                                                As someone who is about to jump into nixos, I’d love to read more about why you hate nix.

                                                1. 19

                                                  I’m not the previous commenter but I will share my opinion. I’ve given nix two solid tries, but both times walked away. I love declarative configuration and really wanted it to work for me, but it doesn’t.

                                                  1. the nix language is inscrutable (to use the term from a comment above). I know a half dozen languages pretty well and still found it awkward to use
                                                  2. in order to make package configs declarative the config options need to be ported to the nix language. This inevitably means they’ll be out of date or maybe missing a config option you want to set.
                                                  3. the docs could be much better, but this is typical. You generally resort to looking at the package configs in the source repo
                                                  4. nix packages, because of the design of the system, have no connection to real package versions. This is the killer for me, since the rest of the world works on these version numbers. If I want to upgrade from v1.0 to v1.1 there is no direct correlation in nix except for a SHA. How do you find that out? Look at the source repo again.
                                                  1. 4

                                                    This speaks to my experience with Nix too. I want to like it. I get why it’s cool. I also think the language is inscrutable (for Xooglers, the best analogy is borgcfg) and the thing I want most is to define my /etc files in their native tongue under version control and for it all to work out rather than depend on Nix rendering the same files. I could even live with Nix-the-language if that were the case.

                                                    1. 3

                                                      I also think the language is inscrutable (for Xooglers, the best analogy is borgcfg)

                                                      As a former Google SRE, I completely agree—GCL has a lot of quirks. On the other hand, nothing outside Google compares, and I miss it dearly. Abstracting complex configuration outside the Google ecosystem just sucks.

                                                      Yes, open tools exist that try to solve this problem. But only gcl2db can load a config file into an interactive interface where you can navigate the entire hierarchy of values, with traces describing every file:line that contributed to the value at a given path. When GCL does something weird, gcl2db will tell you exactly what happened.

                                                    2. 2

                                                      Thanks for the reply. I’m actually not a huge fan of DSLs so this might be swaying me away from setting up nixos. I have a VM set up with it and tbh the thought of trawling through nix docs to figure out the magical phrase to do what I want does not sound like much fun. I’ll stick with arch for now.

                                                      1. 6

                                                        If you want the nix features but a general purpose language, guix is very similar but uses scheme to configure.

                                                        1. 1

                                                          I would love to use Guix, but the lack of nonfree is a killer, as getting Steam running is a must. There’s no precedent for it being used in the unjamming communities I participate in, whereas Nix has a sizable following.

                                                          1. 2

                                                            So use Ubuntu as the host OS for Guix if you need Steam to work. Guix runs well on many OSes.

                                                    3. 10

                                                      Sorry for the very late reply. The problem I have with nixos is that it’s anti-abstraction in the sense that I elaborated on here. Instead it’s just the ultimate wrapper.

                                                      To me, the point of a distribution is to provide an algebra of packages that’s invariant in changes of state. Or to reverse this idea, an instance of a distribution is anything with a morphism to the category of packages.

                                                      Nix (and nixos) is the ultimate antithesis of this idea. It’s not a morphism, it’s a homomorphism. The structure is algebraic, but it’s concrete, not abstract.

                                                      People claim that “declarative” configuration is good, and it’s hard to attack such a belief, but people don’t really agree on what it really means. In Haskell it means that expressions have referential transparency, which is a good thing, but in other contexts when I hear people talk about declarative stuff I immediately shiver, expecting the inevitable pain. You can “declare” anything if you are precise enough, and that’s what nix does, it’s very precise, but what matters is not the declarations, but the interactions, and in nix interaction means copying sha256 hashes in an esoteric programming language. This is painful and as far away from abstraction as you can get.

                                                      Also notice that I said packages. Nix doesn’t have packages at all. It’s a glorified build system wrapper for source code. Binaries only come as a side effect, and there are no first class packages. The separation between pre-build artefacts and post-build artefacts is what can enable the algebraic properties of package managers to exist, and nix renounces this phase distinction with prejudice.

                                                      To come to another point, I don’t like how Debian (or you other favorite distribution) chooses options and dependencies for building their packages, but the fact that it’s just One Way is far more important to me than a spurious dependency. Nix, on the other hand, encourages pets. Just customize the build options that you want to get what you want! What I want is a standard environment, customizability is a nightmare, an anti-feature.

                                                      When I buy a book, I want to go to a book store and ask for the book I want. With nix I have to go to a printing press and provide instructions for printing the book I want. This is insanity. This is not progress. People say this is good because I can print my book into virgin red papyrus. I say it is bad exactly for the same reason. Also, I don’t want all my prints to be dated January 1, 1970.

                                                  2. 8

                                                    For me personally, I never chose Docker; it was chosen for me by my employer. I could maybe theoretically replace it with podman because it’s compatible with the same image format, which Guix (which is much better designed overall) is not. (But I don’t use the desktop docker stuff at all so I don’t really care that much; mostly I’d like to switch off docker-compose, which I have no idea whether podman can replace.)

                                                    1. 3

                                                      FWIW Podman does have a podman-compose functionality but it works differently. It uses k8s under the hood, so in that sense some people prefer it.

                                                    2. 2

                                                      This quite nicely sums it up for me 😄 and more eloquently than I could put it.

                                                      1. 2

                                                        If you’re targeting Linux why aren’t you using a platform that supports running & building Linux software natively like Windows or even Linux?

                                                        1. 12

                                                          … to call WSL ‘native’ compared to running containers/etc via VMs on non-linux OS’s is a bit weird.

                                                          1. 11

                                                            I enjoy using a Mac, and it’s close enough that it’s almost never a problem. I was a Linux user for ~15 years and I just got tired of things only sorta-kinda working. Your experiences certainly might be different, but I find using a Mac to be an almost entirely painless experience. It also plays quite nicely with my iPhone. Windows isn’t a consideration, every time I sit down in front of a Windows machine I end up miserable (again, YMMV, I know lots of people who use Windows productively).

                                                            1. 3

                                                              Because “targeting Linux” really just means “running on a Linux server, somewhere” for many people and they’re not writing specifically Linux code - I spend all day writing Go on a mac that will eventually be run on a Linux box but there’s absolutely nothing Linux specific about it - why would I need Linux to do that?

                                                              1. 2

                                                                WSL2-based containers run a lightweight Linux install on top of Hyper-V. Docker for Mac runs a lightweight Linux install on top of xhyve. I guess you could argue that this is different because Hyper-V is a type-1 hypervisor, whereas xhyve is a type-2 hypervisor using the hypervisor framework that macOS provides, but I’m not sure that either really counts as more ‘native’.

                                                                If your development is not Linux-specific, then XNU provides a more complete and compliant POSIX system than WSL1, which are the native kernel POSIX interfaces for macOS and Windows, respectively.

                                                            2. 9

                                                              Prod runs containers, not Nix, and the goal is to run the exact same build artifacts in Dev that will eventually run in Prod.

                                                              1. 8

                                                                Lots of people distribute dockerfiles and docker-compose configurations. Podman and podman-compose can consume those mostly unchanged. I already understand docker. So I can both use things other people make and roll new things without using my novelty budget for building and running things in a container, which is basically a solved problem from my perspective.

                                                                Nix or Guix are new to me and would therefore consume my novelty budget, and no one has ever articulated how using my limited novelty budget that way would improve things for me (at least not in any way that has resonated with me).

                                                                Anyone else’s answer is likely to vary, of course. But that’s why I continue to choose dockerfiles and docker-compose files, whether it’s with docker or podman, rather than Nix or Guix.

                                                                1. 5

                                                                  Not mentioned in other comments, but you also get process/resource isolation by default on docker/podman. Sure, you can configure service networking, cgroups, and namespaces on nix yourself, just like on any other system, and set up the relevant network proxying. But getting that prepackaged and on by default is very handy.

                                                                  1. 2

                                                                    You can get a good way there without much fuss by using the Declarative NixOS containers feature (which uses systemd-nspawn under the hood).

                                                                  2. 4

                                                                    I’m not very familiar with Nix, but I feel like a Nix-based option could do for you what a single container could do, giving you the reproducibility of environment. What I don’t see how to do is something comparable to creating a stack of containers, such as you get from Docker Compose or Docker Swarm. And that’s considerably simpler than the kinds of auto-provisioning and wiring up that systems like Kubernetes give you. Perhaps that’s what Nix Flakes are about?

                                                                    That said I am definitely feeling like Docker for reproducible developer environments is very heavy, especially on Mac. We spend a significant amount of time rebuilding containers due to code changes. Nix would probably be a better solution for this, since there’s not really an entire virtual machine and assorted filesystem layering technology in between us and the code we’re trying to run.

                                                                    1. 3

                                                                      Is Nix a container system…? I thought it was a package manager?

                                                                      1. 3

                                                                        It’s not, but I understand the question as “you can run a well defined nix configuration which includes your app, or a container with your app; they’re both reproducible, so why choose one over the other?”

                                                                      2. 1

                                                                        It’s possible to generate Docker images using Nix, at least, so you could use Nix for that if you wanted (and users won’t know that it’s Nix).

                                                                        1. 1

                                                                          These aren’t mutually exclusive. I run a few Nix VMs for self-hosting various services, and a number of those services are docker images provided by the upstream project that I use Nix to provision, configure, and run. Configuring Nix to run an image with hash XXXX from Docker registry YYYY and such-and-such environment variables doesn’t look all that different from configuring it to run a non-containerized piece of software.

                                                                        1. 21

                                                                          On Linux, Docker will continue to be free so usage of Podman might be for other reasons (the fact it doesn’t have a daemon is quite nice).

                                                                          Not clear to me if this works on Windows though, seems like it’s only macOS.

                                                                          1. 6

                                                                            (the fact it doesn’t have a daemon is quite nice)

                                                                            Wow this is a great feature. I’m going to have to check out podman now. I never really sweat the overhead of the daemon but in most cases I don’t need it, so doing without it and managing everything through my supervisor system would be fantastic.

                                                                            1. 20

                                                                              Yeah, it’s not even really about “overhead”, it’s about “there’s this long running thing that you ask to do everything on your behalf”. With podman, when you run a container, you are running it, it starts as a child process of the process that ran it, it has your permissions, etc. Which also makes it work with process supervisors in a way that docker doesn’t really… /usr/bin/docker is just an RPC client.
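
                                                                              Which is why things like this work (container name made up):

                                                                                # podman can emit a systemd unit that supervises the container process
                                                                                # directly; there is no daemon in between to talk to
                                                                                podman create --name web -p 8080:80 docker.io/library/nginx
                                                                                podman generate systemd --name web > ~/.config/systemd/user/web.service
                                                                                systemctl --user daemon-reload
                                                                                systemctl --user enable --now web.service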

                                                                            2. 1

                                                                              Yeah, this is currently focused on macOS (and not yet compatible with the new M1 CPUs) but it should be roughly the same for Windows also, as the approach Podman has taken is the same for both Mac and Windows (leverage a Linux VM for the Podman engine). I’d be really interested to hear if anyone follows this on Windows and what the differences are.

                                                                              1. 1

                                                                                https://podman.io/getting-started/installation#windows doesn’t list machine. I guess there’s WSL though.

                                                                              1. 4

                                                                                Might want to take a look at this fun technique from RedHat for making your own distroless images (instead of relying on Google to do it, seeing as they haven’t updated Python 3 for years): http://crunchtools.com/devconf-2021-building-smaller-container-images/

                                                                                1. 1

                                                                                  This is actually quite interesting, and I didn’t know “distroless” was even a thing. I prefer Debian for my base images, but Redhat certainly has the muscle to get some steam behind this idea, and at the end of the day the focus is more about the application and not necessarily the OS so should theoretically be agnostic anyhow.

                                                                                  1. 1

                                                                                    I don’t think this is particularly RedHat-specific, you could probably implement the same thing with Debian, you just need the ability to install packages into a specific root directory? Which dpkg at least does.
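
                                                                                    A Debian-flavored sketch of the same trick (package choice hypothetical):

                                                                                      # build a minimal rootfs containing only what you need,
                                                                                      # then turn that directory into an image
                                                                                      sudo debootstrap --variant=minbase bullseye /tmp/rootfs
                                                                                      sudo chroot /tmp/rootfs apt-get install -y --no-install-recommends python3
                                                                                      sudo tar -C /tmp/rootfs -c . | docker import - my-distroless-python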

                                                                                2. 2

                                                                                  I think that image still uses Python 3.7. Otherwise, it’s a good lightweight option.

                                                                                  1. 1

                                                                                    “Distroless” is an oxymoron. It might not be based on an existing well-known distribution but it’s still a distribution. You still rely on them to maintain the tooling that generates the image, receive security updates, and so forth.

                                                                                  1. 4

                                                                                    Calling Debian 11 “most up-to-date” when it was released 2 weeks ago feels slightly dishonest. For most of the next 3-4 years, it will probably not be most up-to-date.

                                                                                    It’s still a fine basis for many things, obviously.

                                                                                    1. 4

                                                                                      It will be the most up-to-date LTS distro until April 2022 (at which point I will update the article).

                                                                                    1. 10

                                                                                      I wish this explained what tac was at the start, but it seems it’s https://man7.org/linux/man-pages/man1/tac.1.html (cat of reversed lines).

                                                                                      1. 7

                                                                                        Sorry! There’s this at the end of the first paragraph but maybe it’s not as helpful as I hoped it would be?

                                                                                        For those that aren’t familiar with it, tac is a command-line utility to reverse the contents of a file

                                                                                        Happy to revise it with any better suggestions. Maybe I should add a full paragraph to explain what tac is and why you might use it - I’ve been told my posts are too long in the past :(

                                                                                        1. 6

                                                                                          Perhaps an older/wrong version was published, because that line isn’t on the page.

                                                                                          I think that sentence would do just fine.

                                                                                          1. 8

                                                                                            My apologies! It turns out my nginx caching configuration isn’t correctly purging when a page is updated. It should be good now - thanks for cluing me in to what was going on.

                                                                                      1. 4

                                                                                        Ended up rewriting this quite a bit with a lot more information than the previous iteration, and to update it with latest releases and changes.

                                                                                        1. 3

                                                                                          ansi-term is OK, but vterm is even better, these days: https://melpa.org/#/vterm

                                                                                          Also, as someone suggested, LSP makes Emacs into a pretty decent IDE. And Spacemacs makes it easier to get to IDE-like state without having to spend a vast amount of time on configuration.

                                                                                          1. 1

                                                                                            I’ll check out vterm. In what ways is it better?

                                                                                            1. 2

                                                                                              ansi-term has a lot more rendering issues. If you do screen or tmux inside ansi-term it’s less bad, but vterm suffers from this a lot less.

                                                                                              1. 3

                                                                                                vterm is also using a C library via FFI for a lot of its work, so it should be faster than ansi-term.

                                                                                          1. 4

                                                                                            I wish there were data structure/algorithm text books that took memory speed into account. There’s this huge divide between intro textbooks, which mostly just ignore memory speed, and up-to-date academic papers and actual practitioner implementations that do take it into account. (The two sometimes diverge, e.g. it’s not clear to me anyone uses cache-oblivious algorithms in the real world.)

                                                                                            1. 1

                                                                                                I like to think in terms of layers of stability. Application logic at the top, with a stack of libraries underneath, and a library is defined as an API that provides a stability guarantee. Often it’s third-party libraries and then they have their own tests, but sometimes it’s an internal library that’s only used in that one application, and then the stability guarantees can be much looser, but you likely still want to test it.

                                                                                              App logic
                                                                                              ----- library interface ---
                                                                                              unstable implementation goop
                                                                                              ----- library interface ---
                                                                                              unstable implementation goop
                                                                                              

                                                                                              Which is to say, you might still need tests for some internal APIs, but definitely not all due to the costs mentioned in the article.

                                                                                              1. 2

                                                                                                it’s an internal library that’s only used in that one application, and then the stability guarantees can be much looser, but you likely still want to test it.

                                                                                                  I’d say it might be fine to just test it via an application, but yeah, if it is a library enough to have some sort of an interface, it’s better to test this interface. Basically, treat the library like a layer from the layers section of the post.

                                                                                                  I’ve just realized that I have an appropriate war story to share. In rust-analyzer, we originally started with keeping the syntax tree library in-tree. Then, at some point, we extracted it into a stand-alone rowan package. One problem with it though is that all the tests are still in the rust-analyzer repo. This actually is rather OK for myself: I can easily build the rust-analyzer test suite against a custom rowan and see if it breaks. It does make contributing to the library rather finicky for external contributors, though, as the testing workflow becomes rather non-traditional.

                                                                                                1. 1

                                                                                                    I’ve made the mistake of testing all layers instead of only layers where stability was meaningful. So the key, I think, is “where is stability a useful property?” A generalized version of “only test public APIs”, I think, since that’s not as meaningful in larger applications.

                                                                                                  1. 1

                                                                                                      Hm, I don’t think the mistake is necessarily the subject of testing. It might be just the way tests are written.

                                                                                                    Here’s an example test which tests a very internal layer of rust-analyzer, but without stability implications:

                                                                                                    https://github.com/rust-analyzer/rust-analyzer/blob/5193728e1d650e6732a42ac5afd57609ff82e407/crates/hir_ty/src/tests/simple.rs#L91-L110

                                                                                                      It tests the type-inference engine and prints, for each expression, its type. Notably, this operates at the level of a completely unstable and changing internal representation. However, because the input is Rust code, and the output is an automatically updatable expectation, the test is actually independent of the specifics of type inference.

                                                                                              1. 1

                                                                                                Won’t you lose your root image, though, if spot.io makes a bad prediction?

                                                                                                1. 1

                                                                                                      See the Root Volume persistence section. They create an AMI based on your root image (plus root volume snapshots). That’s not to say you shouldn’t have backups, using something like litestream, mysql-backup, etc.

                                                                                                  1. 1

                                                                                                    I read that, but my assumption was they did that on migration, but maybe I misunderstood and they do it regularly?

                                                                                                    1. 1

                                                                                                      It seems to be both

                                                                                                  2. 1

                                                                                                    At least for Azure, you have a choice of VMs being destroyed or deallocated. If they’re deallocated, the disks persist (and you keep paying for them) and you can redeploy the VM. You can opt in to a notification that lets you do a graceful shutdown, as long as you can do so within 30 seconds. I’m tempted to try one of the 72-core machines for package building and have it throw the built packages at an NFS share from a file storage thing and use the local SSD (which is thrown away if the VM is deallocated) for all intermediate build products.
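
                                                                                                      (If I remember right, the notification is delivered via the instance metadata service’s scheduled-events endpoint, which you poll from inside the VM; a sketch:)

                                                                                                        # poll Azure IMDS for scheduled events (e.g. Preempt) and trigger
                                                                                                        # a graceful shutdown when one shows up
                                                                                                        curl -s -H Metadata:true \
                                                                                                          "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"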

                                                                                                    1. 1

                                                                                                      I believe spot.io supports Azure and other providers, but I haven’t tried it myself.