Threads for hartzell

    1. 11

      I want to write a reply post using flowmatic, but who knows if I’ll get around to it. This is a really common pattern and it’s cool that it can be written by hand, but it’s too much boilerplate. It should be easier (and with my library, it is 😊).

      1. 3

        You should. If you don’t, I might (though I’d have to learn to use it first). I was thinking as I was writing this article that it’s only a matter of time before some library bundles all this boilerplate up into a nice interface using generics. I’ll have to spend some time learning your library.
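
        As a sketch of what such a generics wrapper might look like (this is my own toy version, not the API of any particular library): one helper that hides the worker goroutines, the index channel, and the WaitGroup.

        ```go
        package main

        import (
        	"fmt"
        	"sync"
        )

        // Map applies fn to every element of in using n concurrent workers,
        // writing results by index so input order is preserved.
        func Map[T, R any](n int, in []T, fn func(T) R) []R {
        	out := make([]R, len(in))
        	idx := make(chan int)
        	var wg sync.WaitGroup
        	for w := 0; w < n; w++ {
        		wg.Add(1)
        		go func() {
        			defer wg.Done()
        			for i := range idx {
        				out[i] = fn(in[i])
        			}
        		}()
        	}
        	for i := range in {
        		idx <- i
        	}
        	close(idx)
        	wg.Wait()
        	return out
        }

        func main() {
        	// "process these items with 4 workers" collapses to one call:
        	squares := Map(4, []int{1, 2, 3, 4, 5}, func(x int) int { return x * x })
        	fmt.Println(squares) // [1 4 9 16 25]
        }
        ```

        All the channel plumbing lives in one place; call sites shrink to a single line.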

        1. 1

          I started looking at the repo, and as a sanity check, I added a benchmark, then ripped out the concurrency and ran the benchmark again. It could be that I did something wrong in my tests, but I found that things went slightly faster with no concurrency at all. It’s pretty hard to get a speed-up on purely CPU-bound operations because the overhead of scheduling is so high.

          1. 1

            I’ll admit that I need better benchmarks, but I do have some basic timing information on the indexing phase of a directory with ~900 images (everything from Lorem Picsum). With a single worker it takes my laptop 3m 43s; with two workers it takes 2m 9s (I didn’t bother going higher; my laptop only has two cores). I wouldn’t call this test purely CPU-bound, since a lot of the time goes into loading images.

            Of course, that’s just indexing, you’re probably right about the swapping phase. Caching resized tiles (even if it had to be on disk) would probably have a greater effect on performance.

            1. 1

              To be clear, I also found that it was faster to run 2 workers rather than 1. I think it leveled off by 8 workers. What got me the best performance though was to rip out the channels and just pass slices from one function to another instead. I was just looking at the testdata directory though, not a huge amount of data.

              I think it’s worth adding a benchmarking suite of some kind and figuring out empirically whether you get better performance from having the concurrency in just one place but not another, e.g. to read the files from disk but not to do the tiling.

              It’s a fun project because there are a lot of avenues to explore. :-)
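
              For the benchmarking piece, Go’s testing package can also be driven outside the test harness with testing.Benchmark, which makes it easy to sweep worker counts. A rough sketch (tile here is a stand-in for the real per-image work, not the project’s actual code):

              ```go
              package main

              import (
              	"fmt"
              	"testing"
              )

              // tile stands in for the real per-item work (loading and resizing an image).
              func tile(x int) int {
              	s := 0
              	for i := 0; i < 10000; i++ {
              		s += x * i
              	}
              	return s
              }

              // run processes n items through a channel with the given number of workers.
              func run(items, workers int) {
              	ch := make(chan int)
              	done := make(chan struct{})
              	for w := 0; w < workers; w++ {
              		go func() {
              			for x := range ch {
              				tile(x)
              			}
              			done <- struct{}{}
              		}()
              	}
              	for i := 0; i < items; i++ {
              		ch <- i
              	}
              	close(ch)
              	for w := 0; w < workers; w++ {
              		<-done
              	}
              }

              func main() {
              	for _, workers := range []int{1, 2, 4, 8} {
              		r := testing.Benchmark(func(b *testing.B) {
              			for i := 0; i < b.N; i++ {
              				run(100, workers)
              			}
              		})
              		fmt.Printf("workers=%d %s\n", workers, r)
              	}
              }
              ```

              Running it prints an ns/op line per worker count, which is usually enough to see where the speed-up levels off.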

              1. 1

                This has been a very fun project. I wrote about the concurrency, but the color extraction and matching was probably more fun to work out (it probably wouldn’t be much fun to read about though).

                You’re absolutely right about the benchmarks. It’s on my list.

                But I don’t get the same behavior you do when I remove the concurrency code. It might have shaved off a few seconds from the workers=1 case, but it’s hard to say without better benchmarks because there’s some variation in my testing anyway.

                I don’t know how much time you want to spend on this, but here’s the exact code I ran: I DM’d you a link to the images I’ve been testing with.

      2. 2

        In addition to being a neat looking library, the flowmatic vs. stdlib examples in the flowmatic README are a nice little primer/refresher on how to approach these problems w/ the stdlib.

      3. 1

        I’d read that

    2. 2

      I appreciate the point of view in both the “why not…” and “relieve your…” articles: defining a simple happy path that makes work possible in most cases for the people who are least likely to be able to solve complicated problems, while leaving escape paths for the cognoscenti.

      One thing that I can’t figure out, though, is: “Where’s the advice on packaging?”

      The articles talk about where to get a Python (OS, …), using virtual environments via venv, installing packages via pip, and managing prerequisites via requirements.txt.

      What should people do, today, once their app is ready and they want to share it? How should they package it and upload it to a PyPI-style repository so that other people can pip install it into a virtual environment?

      ~3 years back I mapped out the obstacles in my work world and defined a similar happy path that met my community’s needs, making the easy things easy and the hard things possible (nod to Perl…). Poetry ended up being a central pillar at the time because of the ease/reliability of poetry build and poetry publish. It’s still working for us today, but perhaps I’d make a different decision in a green field.

      1. 1

        That’s because “packaging” is an overloaded word that can mean:

        • installing and using a package
        • creating and sharing a lib on pypi
        • creating and deploying a service
        • creating and sharing an end user program

        Now, in my experience, most people claiming to have “packaging problems” are in the first category. And the first category’s problems most of the time don’t come from packaging but from bootstrapping: the errors appearing when you install are just symptoms.

        Hence the article wording and content.

        Now you seem to be in need of help with the other three “packaging” questions.

        Each of them requires at the very least its own article. Probably more.

        1. 1

          That sounds spot on. Definitely enjoyed the reasoning in your solutions to the first bullet, as well as the summaries of common alternatives and their pitfalls.

          We followed similar reasoning with slightly different scope (users, needs) and ended up limiting python choices to minimize chaos, using Poetry to build/prepare/upload things onto an in-house GitLab-based pypi, and then using pip to “deploy” them. Works well enough for our needs but it’s still a tiptoe through a minefield.

          I’d love to read posts on the other topics if you’re ever so inclined!

    3. 4

      Here’s the definition I’ve been using:

      Software development is about using code to create value. Software engineering is doing that in the context of change.

      1. 4

        I prefer Rob Pike’s variation:

        Software engineering is what happens to programming when you add time and other programmers.

        1. 3

          I believe that credit for that in the Go community goes to Russ Cox, from his What is Software Engineering? essay. Russ goes on to cite Titus Winters (7 minutes of video here).

          1. 2

            Thanks for the correction. I had actually looked it up to check the reference, found the article you linked, but misread the opening paragraph as Russ quoting Rob Pike, which he was not doing.

            Titus Winters’ original:

            Software engineering is programming integrated over time

            And Russ gets credit for the variation I quoted.

            1. 2

              No problem. I’m fond of Russ’ quote. I lead a “Computational Engineering” team at a biotech; recruiters have told me that I should get the name changed because potential recruits can’t imagine what the work is.

              I finally settled on (giving Russ his nod):

              Computational Engineering is what happens to Bioinformatics and Infrastructure when they meet time and teammates.

      2. 2

        That’s one of the two important differences; the other is collaboration: software engineering involves working on a codebase with other people, which means you need to prioritize things like readability, integration tests, documentation, etc.

        1. 1

          That’s interesting, so do you think it’s impossible to do software engineering alone?

          1. 1

            Not impossible; just far less necessary (and atypical). There are rare cases of one person working on a project large & complex enough to be worth the effort of applying engineering principles (e.g. Austin Meyer’s X-Plane), but the vast majority of codebases with only one contributor are pretty small.

            1. 1

              My view is that code written alone will still be read by at least one person who thinks differently: the future you.

              1. 1

                That’s what the time factor is for.

    4. 15

      I think the key insight here is that container images (the article confuses images and containers, a common mistake that pedants like me will rush to point out) are very similar to statically linked binaries. So why Docker/container images and why not ELF or other statically linked formats?

      I think the main answer is that container images have a native notion of a filesystem, so it’s “trivial” (relatively speaking) to put the whole user space into a single image, which means that we can package virtually the entire universe of Linux user space software with a single static format whereas that is much harder (impossible?) with ELF.

      1. 4

        And we were able to do that with virtualization for at least 5 to 10 years prior to Docker. Or do you think that also packaging the kernel is too much?

        Anyways, I do not think that a container having the notion of a filesystem is the killer feature of Docker. I think that moving the deployment code (installing a library, for example) close to compilation of the code helped many people and organizations who did not have the right tooling prior to that. For larger companies that had systems engineers, cgroups mostly provided the security part, because packaging was solved decades prior to Docker.

        1. 1

          IMO it’s not the kernel but all of the supporting software that needs to be configured for VMs but which comes for ~free with container orchestration (process management, log exfiltration, monitoring, sshd, infrastructure-as-code, etc).

          Anyways, I do not think that a container having the notion of a filesystem is the killer feature of Docker. I think that moving the deployment code (installing a library, for example) close to compilation of the code helped many people and organizations who did not have the right tooling prior to that.

          How do you get that property without filesystem semantics? You can do that with toolchains that produce statically linked binaries, but many toolchains don’t support that, and of those that do, many important projects don’t take advantage of it.

          Filesystem semantics enable almost any application to be packaged relatively easily in the same format which means orchestration tools like Kubernetes become more tenable for one’s entire stack.

      2. 4

        I can fit a jvm in a container! And then not worry about installing the right jvm in prod.

        I used to be a skeptic. I’ve been sold.

        1. 2

          Slightly off topic, but a JVM inside a container becomes really interesting with resource limits. Who should be in charge of the limits, the JVM runtime or the container runtime?

          1. 8

            Gotta be the container runtime (or the kernel or hypervisor above it) because the JVM heap size limit is best-effort. Bugs in memory accounting could cause the process to use memory beyond the heap limit. Absent that, native APIs (JNI) can directly call malloc and allocate off-heap.

            It would still make sense for the container runtime to tell the JVM & application what the limits currently are, so it can tailor its own behaviour to try to fit inside them.

          2. 4

            It’s easy: the enclosing layer gets the limits. Who should set the resource limits? ext4 or the iron platter it’s on?

            1. 2

              What’s the enclosing layer? What happens when you have heterogeneous infrastructure? Legacy applications moving to the cloud? Maybe in theory it’s easy, but in practice it’s much tougher.

          3. 2

            Increasingly the JVM is setting its own constraints to match the operating environment when “inside a container”.

      3. 4

        Yes, layers as filesystem snapshots enable a more expressive packaging solution than statically linked alternatives. But it’s not just filesystems: runtime configuration (variables through ENV, invocation through CMD) makes the format even more expressive.

        p.s. I have also updated the post to say “container images”

      4. 3

        I think the abstraction on images is a bit leaky. With Docker you’re basically forced to give an image a name in a system-wide registry so that you can then run it as a container.

        I would love to be able to say like… “build this image as this file, then spin up a container using this image” without the intermediate steps of tagging (why? because it allows for building workflows that don’t care about your current Docker state). I know you can just kinda namespace stuff but it really bugs me!

        1. 3

          Good practice is addressing images by their digest instead of a tag using the @ syntax. But I agree: the registry has always been a weird part of the workflow.

          1. 1

            addressing images by their digest instead of a tag using the @ syntax.

            Be careful about that. The digest of images can change as you push/pull them between different registries. The problem may have settled out, but we were bitten by changes across different releases of software in Docker’s registry image and across the Docker registry and Artifactory’s.

            I’m not sure if there’s a formal standard for how that digest is calculated, but it certainly used to be (~2 years back) very unreliable.

          2. 1

            Oh I wasn’t aware of that! That could let me at least get most of the way to what I want to do, thanks for the pointer!

      5. 3

        I noticed Go now has support for including, in its essentially static binary, a virtual filesystem instantiated from a filesystem tree specified at compile time. In that scenario, it occurs to me that containerization perhaps isn’t necessary, which would expose read-only shared memory pages to the OS across multiple processes running the same binary.

        I don’t know whether, in the containerization model, the underlying/orchestrating OS can identify identical read-only memory pages and exploit sharing.
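
        That’s Go’s embed package (Go 1.16+). Since //go:embed needs the files to exist at compile time, the sketch below fakes the embedded tree with fstest.MapFS; the point is that embed.FS satisfies the same fs.FS interface, so consuming code is identical either way.

        ```go
        package main

        import (
        	"fmt"
        	"io/fs"
        	"testing/fstest"
        )

        // With real embedding you would write (requires static/ at compile time):
        //
        //	//go:embed static
        //	var static embed.FS
        //
        // Here fstest.MapFS stands in so the example runs without files on disk.
        var static fs.FS = fstest.MapFS{
        	"static/index.html": &fstest.MapFile{Data: []byte("<h1>hello</h1>")},
        }

        func main() {
        	data, err := fs.ReadFile(static, "static/index.html")
        	if err != nil {
        		panic(err)
        	}
        	fmt.Println(string(data)) // <h1>hello</h1>
        }
        ```

        An http.FileServer can serve such an fs.FS directly via http.FS, which is how people ship a whole static site inside one binary.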

        1. 2

          I think in the long term containers won’t be necessary, but today there’s a whole lot of software and language ecosystems that don’t support static binaries (and especially not virtual filesystems) at all and there’s a lot of value in having a common package type that all kinds of tooling can work with.

        2. 2

          As a packaging mechanism, in theory embedded files in Go work OK (the approach follows the single-process pattern). In practice, most Go binary container images are empty (FROM scratch + certs) anyway. And there are lots of environment-dependent files you want at runtime (secrets, environment variables, networking) that are much easier to add declaratively to a container image than to recompile in.

      6. 2

        So why Docker/container images and why not ELF or other statically linked formats?

        There are things like gVisor and binctr that work this way, as do some things like Emscripten (for JS/WASM).

        1. 2

          I really hope for WASI to pick up here. I used to be a big fan of CloudABI, which now links to WASI.

          It would be nice if we could get rid of all the container (well actually mostly Docker) cruft.

    5. 3

      I really like this on Linux hosts where I don’t have root or don’t want to pollute the globally installed packages.

      It lives in your home directory and everything is compiled, so there are no random binaries, which is nice if you prefer to audit all of the build sources and instructions.

      1. 1

        Indeed! I use it in two places where it’s become pivotal: on an ElementaryOS 5.x VM (Ubuntu 18.04 base) that needs some more recent dependencies than the Mac I’m otherwise using, and on a cluster for work that’s running RHEL 6 and won’t be upgraded anytime soon (though it’s being replaced by something running RHEL 7 by EOY).

      2. 1

        If you’re in the Linux world you should definitely take a look at the Spack project. If you’re in the Mac world it still might be worth a look.

        It’s basically a superset of Homebrew’s functionality. Great for building complex environments (having multiple versions of an application installed, fine-grained control of build options, microarchitecture-specific compiler optimizations, etc.). And, […] Spack can also be used to handle simple single-user installations on your laptop.

        Nicely documented and great community!

    6. 3

      I’ve been playing with running emacs as a daemon and the one thing that I can’t figure out how to get to work “right” is project (python virtual environment)-specific processes.

      My emacs configuration boots quickly enough on my Mac laptops and most cloud environments I work in. I recently changed jobs, and in my new AWS+EFS+corpAD world emacs often takes over a minute to start (discussed a bit with the author of the straight package manager). I think, based on a bit of strace output, that the delay is due to emacs walking down all of the directories on my load-path and trying to open each file that it needs (many of which are in the system dir, which is at the end). The pathology is something about this environment; I stood up a personal instance with an EFS home dir and didn’t have this problem.

      In my current world the dev tools for various python projects are part of those project’s venv, so I end up having to kill and restart the daemon as I change what I’m doing.

      Is there a better way that I could be managing this?

    7. 1

      This is tangential, but thanks for the warning about the fan. I wonder if the last few generations of NUCs all have a fan that loud.

      1. 1

        I wonder if it’s possible to replace the stock fan with something more quiet? Or maybe an enclosure like this one:

    8. 1

      Any opinions on a good home setup for ZFS? (With ECC DRAM)

      1. 1

        Depends what you already know / want out of it - I’ve had some success with FreeNAS, but it assumes quite a bit about what you would like.

        1. 1

          I am mostly interested in protecting my data. Then there is cost. What are the options on the low end of cost?

          1. 2

            ECC DRAM is a complex set of standards, and it’s very easy to burn money on incompatible motherboards, especially at the cheap end of the scale.

            Ex-server hardware tends to be dirt cheap for this kind of thing (especially right now). If you can figure out a way to run it without the noise driving you mad (maybe fitting watercooling?), that’s going to be the cheapest way to get a compatible ECC setup.

          2. 2

            I’ve had good luck with HP microservers. I described it a while back, here:

    9. 2

      I’ve found that I split my time between the command-line and magit in emacs.

      1. 1

        Can I ask what things you use the command line for? Just curious, not doubting you or anything. I’ve used magit for 10+ years and I don’t think I’ve used the git command line in 8+ years. Magit definitely introduced me to a lot of git features that are so trivial to use there and would be a nightmare to do via the command line (IMHO).

        I think I can say magit is my all time favorite piece of software. It kind of acts as the hub for all of my development. I should probably go donate again right now.

        1. 2


          I think that the driver is that I’ve reached my limit (buffer’s full, losing neurons, etc.) in the number of magic short keystrokes that I can remember. I learned git on the command line before I discovered magit.

          I’ve never been big on aliases and shortcuts, etc.; it’s easier for me to remember N+M things and how to combine them than it is to remember N*M aliases. I hang onto nouns and verbs easily (although git commands are, well, irregular…).

          I also like to be able to do these things on minimally-configured machines, on remote systems, and/or in other people’s accounts so leaning hard on any gui is a personal anti-pattern. I can even use vi in a pinch… ;)

          By way of a list:

          • I almost always stage and commit from a magit buffer (partial staging is a godsend).

          • If I interactive rebase, I do it in magit.

          • I probably create/switch branches about 40% of the time in magit.

          • I generally git init from the command line.

          • I have an alias, ‘git hist’ that I use to get an overview of the branching structure from the command line. I’ll sometimes poke around and do the equiv. in magit but not often enough that it sticks. From my ~/.gitconfig:

            # From Ricardo Signes
            hist = log --all --graph --color=always --pretty='[%C(cyan)%h%Creset]%C(bold cyan)%d%Creset %s'
            shist = log --graph --color=always --pretty='[%C(cyan)%h%Creset]%C(bold cyan)%d%Creset %s'
          • I git rm from the command line.

          • I nearly always git fetch and git merge from the command line.

          • git push seems to be split about 50/50, depending on what window is focused.

          1. 1

            Cool, thanks for the explanation. That makes sense. My workstation is a remote machine at Digital Ocean that I use ssh (mosh) and tmux with, with emacs always running. So my computers just act as a browser and gateway to that workstation, where all the work actually happens. I don’t ever use other people’s computers, so I think it’s easier for me to work the way I do.

            Have a good weekend!

          2. 1

            Interestingly, I am also split between magit and the command line, and where the splits occur for me are almost identical to this list! The only major thing I would add is that I tend to do all my remote adding/editing/removing from the command line as well.

            1. 1

              Oops, remotes…. Yep, command line (though my usage is pretty simple…)

              Great minds think alike!

        2. 1

          One other thought on rereading your question….

          a lot of git features that are so trivial to use and would be a nightmare to do via the command line (IMHO).

          I strive to organize my work so that I rarely have to do anything nightmare-ish. Basic nvie-style git flow and its derivatives.

          My only common dirty trick is when I forget to create a branch and drop a bunch of commits on my base branch. It happens often enough that I can do the recovery dance from memory and it’s not really nightmarish.

          I’m curious, what git features you use that are trivial in magit and really difficult on the command line. The only ones that come to mind for me are interactive rebase and staging/committing an unfortunate conglomerate of work in bits, which are both way easier in a “tui/gui” environment.

          1. 2

            I’m curious, what git features you use that are trivial in magit and really difficult on the command line.

            Not the parent, but for myself:

            Increasing/decreasing hunk sizes and selectively choosing what hunks to stage/unstage is the biggest one for me. I still use git a lot on the command line, but its interface for doing this stuff is… not good. In magit it’s stupid easy: +/-, (s)tage, (u)nstage, etc. Also, ediff and conflict resolution via magit are rather nice as well.

            1. 1

              Absolutely, that’s what I’d shorthanded as “partial staging….”

              1. 1

                Gotcha, wasn’t clear what you meant there. But another bit that’s nice is undoing/redoing things from prior commits as well. It’s all really well done in magit. As well as getting stuff out of stashes, etc.

                1. 1

                  Face it, it’s all nice :)

                  It’s been great to read what bits other folks use.

                  1. 1

                    Heh, sure. To be clear, I use more than this, but the overall hunk handling in magit is… divine is the best word I can think of. I still use the command line though; I’ve a ton of old aliases for drilling through histories that magit really doesn’t help with. Being able to deal with plain git on systems where my local emacs setup isn’t available is also nice. I’ve never been big on getting so used to other tools that I’m incapacitated when I don’t have my preferred thing.

          2. 1

            I’m curious, what git features you use that are trivial in magit and really difficult on the command line.

            You mentioned staging/committing, but even for me, just seeing what is changed and staged is a big help. It’s better than “git status” because everything is nicely presented – you can collapse and expand sections as you want. You can even hit enter on a diff hunk to go to that hunk and edit it! As you mentioned, you can stage/commit, but I like it even when I want to commit everything, not just when I’m dealing with “an unfortunate conglomerate of work”. I like to review my changes before staging them, even when it’s not an especially complicated set of changes.

            1. 1

              Don’t want to leave the impression that “unfortunate conglomerates” is my general modus operandi; it’s definitely the oops case.

              In the usual, neat case I find myself splitting between exploring changes in magit and just scrolling the output of git diff in a terminal window.

              1. 1

                Oh, definitely. It didn’t come across that way. I just wanted to point out that I find magit a better experience not just for unusual, disaster-recovery situations.

          3. 1

            I said “nightmare” above but that’s definitely an exaggeration. Most of these have been mentioned. But I think for me it comes down to not context switching. I can just do what I want with a couple keystrokes without dropping to the command line and remembering git syntax. Maybe that syntax is as ingrained for you guys as the magit keystrokes are for me, so in the end it comes down to being comfortable.

            Interactive rebase (and the tons of features this provides), stashes, rebase and merge conflict resolution, cherry picking, hunk staging, visual representation of the entire project state.

            But we all just work differently. I’m not trying to convince anyone to change their workflow; I was just curious. I’ve seen people here say they use diff or log in the command line, for example, which I find odd: instead of switching to a terminal and typing a command you can just press tab on a file and see the diff, press enter to go to that place in the file, or s to stage it. Using the command line here just feels like extra work, especially if there are 5+ files you are diffing. There just seems to be way more value in using magit for this (IMHumbleO). Same with l l for log. But with magit’s log you can scroll up and down, press enter on a commit, etc. Whereas on the command line you have to look for SHA hashes to get the code in that commit, or count how many commits to use HEAD~x.

            They just feel like an extension of where I am. And none of the magit features seem, to me at least, to have any shortcomings or trade-offs. It just feels like a complete win.

            Sometimes I get so ingrained in my workflow that it seems like the only logical one, which is obviously not true and a dumb mindset for me to have. Thanks for sharing.

    10. 1

      Coincidentally, a working-at-home friend just asked me about adding a second device to his network to get better coverage in his home. He conveniently has cat-6 already run between one end and the other.

      I’m confused about the state of the art for devices switching between APs. One of the comments below mentions ‘802.11v roaming’ as if it’s a special thing that needs to be enabled (the commenter was using OpenWRT).

      I suspect that my friend is currently using an ISP provided router as his AP; details pending.

      Is roaming exotic or a fairly standard part of the world?

    11. 7

      Avoid meshes if you can. You’ll want n access points, where n is an integer and depends on the area to cover. Connect those access points to the upstream using cabled ethernet.

      Mesh is fine if you want coverage, not so fine if you want capacity in a saturated environment. Every packet has to be sent more than once across the air, and the impact of that is worse than a doubling because of the way radio time is shared.

      Clueful friends of mine tend to use either Ubiquiti or Mikrotik. One has Cisco Aironets brought home from work when they replaced them with new radios. I have Mikrotik hardware myself, the oldest is 10+ years old at this point and still receiving OS upgrades. If you consider Mikrotik, look for metal hardware, not plastic. The metal is built to last.

      My own policy is to use those slow old APs forever, and to say that if something wants 100Mbps or more then that device needs an ethernet cable. That’s been a good policy for me in practice. For example it’s kept the WLAN free of bandwidth hogs, even if those hogs (like a few giant rsync jobs I run) aren’t time sensitive.

      1. 2

        [I asked an extended version of this in a different reply in this thread]

        Is there anything special you need to do to enable switching amongst the various access points as you wander around the house?

        1. 1

          Enable, no, but there are things you can do to improve quality of service. I use a Mikrotik feature called CAPsMAN; I understand that Ubiquiti and Cisco Aironet provide the same functionality. (I don’t know about other vendors; those three are just what friends of mine use.)

          The only thing to watch out for is really that you have to purchase APs from one vendor if you want the nice roaming. If you mix brands, TCP connections will be interrupted when you move through the house, and a moving station may remain connected to an AP for as long as that’s possible, not just for as long as that AP is the best choice.

      2. 1

        You’ll want n access points, where n is an integer and depends on the area to cover. Connect those access points to the upstream using cabled ethernet.

        If I could get ethernet to everywhere I want wifi, I wouldn’t need the wifi.

        1. 1

          That’s true of course, but isn’t it rather beside the point? The goal is to get ethernet to enough points that the entire area has a good WLAN signal.

          When I installed my cables I strictly needed two APs, but I got three instead in order to simplify the cabling. More APs, but less work pulling cables.

      3. 1

        I don’t know if you’d call the environment saturated here in an urban road but mesh is working nicely. No dropouts, fast, covers everything, cheap. What sort of environment would cause it trouble?

        1. 2

          At one point 27 different WLANs were visible in what’s now our kitchen, two of them often with a great deal of traffic, and intuitively I think there was some other noise source, not sure what. That was usually good, occasionally saturated, and bulk transfer throughput would drop steeply, even as low as 1Mbps. I cabled, and now I don’t need to pay attention to the spectral graph.

          I’ve worked in an office where over 50 WLANs from various departments and companies in the same building were visible. Some people did >100-gigabyte file transfers over our WLAN, so I expect our WLAN bothered the other companies as much as theirs did us. The spectral graph was pure black.

          1. 1

            As of right now, I see 21 networks from a Macbook in my living room. 2 of those are even hotspots from the street, run by competing phone companies. It doesn’t help that many ISPs offer “homespots,” where customers who rent their router create several SSIDs – one for the user, and one for other customers of that ISP to use as a hotspot. So I guess mesh is not a good idea where I am.

            1. 2

              Well, most people don’t have a lot of guests who transmit a lot of traffic, so maybe?

              Still, I haven’t regretted the cable I had installed. Remember that you can simplify the problem: you don’t have to install the cable along the most difficult route.

    12. 7

      Nix seems… weird. I don’t quite grok its value. I mean, Rust pins everything to the Cargo.lock file, so what sort of additional reproducibility guarantees does Nix give here? If it’s about the toolchain, and I need to pin a specific rustc version, can’t I just ship the Rust compiler in a Docker container or something?

      1. 8

        It lets you pin versions of native libraries, and also install any tools you need for tests etc. In a large rust project I work on we use Cargo.lock for rust dependencies, but I also have a nix file with:

        buildInputs = [
            shellcheck # for linter
        ];

        and another for load testing with:

        buildInputs = [
            coreutils # provides chmod
            postgresql # provides psql
            procps # provides pkill
        ];

        The load test is nice both because it’s reproducible, and because it’s easy to parameterize it across multiple versions of the upstream software.

      2. 4

        can’t I ship just the Rust compiler in a Docker container or something?

        It (or things like Spack) can also be a manageable-as-code, reproducible way to build the complicated things that one then encapsulates in a container (or not, if containers don’t actually add any additional value in a given situation).

        If you have a Docker image that you built with a Dockerfile that starts off with yum update or pip install (without some sort of lockfile) or …, you’ll never be able to rebuild exactly that Docker image, and it’ll be all kinds of fun trying to figure out what’s actually different in the new one.

        This matters a lot when you have an image and need to make “just one small change to it”.

        This turns out to be really useful for reproducible science (which is, I suppose, redundant).

      3. 3

        You could, but the nix method works on Macs too. Rust and Cargo.lock don’t pin glibc or other system-level dependencies. I use this in another project of mine to pin sqlite3 for a Discord bot.
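
        Roughly, that kind of pinning can be sketched like this (a hypothetical shell.nix, not my actual config; the nixpkgs URL and package names are assumptions):

        ```nix
        # Hypothetical sketch: pin system-level dependencies by pinning the
        # nixpkgs revision itself, then list them alongside the Rust toolchain.
        { pkgs ? import (fetchTarball {
            url = "https://github.com/NixOS/nixpkgs/archive/nixos-19.09.tar.gz";
          }) {} }:

        pkgs.mkShell {
          buildInputs = [
            pkgs.rustc
            pkgs.cargo
            pkgs.sqlite # the pinned system-level dependency
          ];
        }
        ```

        Everyone who runs nix-shell against this file gets the same sqlite, on Linux or on a Mac, no VM involved.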

        1. 1

          Docker works on Macs too, though? I figured the issue is when Rust links to glibc; that’s where funny things can happen.

          1. 11

            Docker works on Macs by running a VM. Nix works as a Darwin binary emitting Darwin binaries.

            1. 1

              But the VM is a means to an end, right? It abstracts away all those userland dependencies. I guess Nix makes them explicit, but isolated.

              Now I get the value though, thanks. If I really want my program to emit native stuff reproducibly, Nix makes sense. That is, if I’m building something that needs to work on OSX natively, instead of just running on it.

              If I’m shipping containerized server apps to run god knows where, Docker will be sufficient. As long as those containers are built reproducibly!

              1. 2

                I haven’t gone deep into containers, but nixpkgs has a dockerTools.buildImage function that I’ve used to build some really lean images atop alpine. If you end up with a nixified package, I’ve found it a good way to make a container for it.

                1. 3

                  The nice thing is that you don’t even need a base image such as Alpine. You can just start from scratch and then the Docker image will contain the transitive closure of the package specified in contents. So, the result is a fully self-sufficient container that only contains what is absolutely needed.

                  By the way, in most cases I would recommend dockerTools.buildLayeredImage, since it builds multiple layers based on Nix store paths, meaning that layers are often shared between images. See grahamc’s blog post on how the layers are constructed.
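
                  A hypothetical sketch of what that can look like (myapp is a placeholder for your nixified package, not a real attribute):

                  ```nix
                  # Hypothetical sketch: a from-scratch layered image whose contents
                  # are one package; its transitive closure is included automatically.
                  { pkgs ? import <nixpkgs> {}, myapp }:

                  pkgs.dockerTools.buildLayeredImage {
                    name = "myapp";
                    tag = "latest";
                    contents = [ myapp ];
                    config.Cmd = [ "${myapp}/bin/myapp" ];
                  }
                  ```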

                  1. 2

                    Sometimes people think they might need a shell.

                    1. 2

                      Never really needed that, but it can be worked around. Both buildImage and buildLayeredImage take a list for contents, so add bash and perhaps coreutils. I am not denying that there are use cases where you want to use some base image, though.
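
                      For example (a sketch; myapp stands in for whatever nixified package the image wraps):

                      ```nix
                      # Hypothetical sketch: the same kind of minimal image, with a
                      # shell and coreutils added. "myapp" is a placeholder package.
                      pkgs.dockerTools.buildLayeredImage {
                        name = "myapp-debug";
                        contents = [ pkgs.bashInteractive pkgs.coreutils myapp ];
                      }
                      ```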

                      I like these small images without bash, coreutils, etc. I know container != security, but if someone manages to exploit the software it’s nice if they don’t have immediate access to a shell, etc.

                      At work, I provide these lean images to the people who deploy them in Kubernetes or whatever they use and I never had a single complaint.

      4. 1

        You can pin and roll back your whole deployed OS, or dev env. If you don’t use this, it’s less valuable.

    13. 2

      I’m not sure if I’m missing it or if it’s not there, but there doesn’t seem to be any way to serve the web interface over https. Don’t see any “ToDo” items either:

      Am I looking right past it?

      1. 2

        Yes, koushin assumes a reverse proxy is set up right now.

      2. 1

        It’s likely meant to be served from behind a webserver like NGINX or Apache with a reverse proxy; they would then handle TLS. That’s how I intend to run it.

      3. 1

        [edit: found an Echo recipe and it looks like Koushin isn’t wired up to call StartAutoTLS]

        Echo, the underlying web server, says it supports “Automatic TLS via Let’s Encrypt”.

        I’m screwing around with it inside my NAT’ed house network, so any ACME bits aren’t going to work…

        This Echo recipe indicates that the server needs to be started via e.StartAutoTLS, but cmd/koushin/main.go starts it with e.Start, so it looks like it’s not supposed to be working at the moment.

    14. 3

      I’m curious (and sent Christine an email) what a Docker image would look like if she followed one of the “minimal docker images that start from the ‘scratch’ image” recipes (e.g. this one).

      It forgoes the reproducibility achieved via Nix, but might be simpler.

      Anyone have any experience/comparisons?

      1. 4

        I actually asked about this in IRC, and the response was that she depends on a number of command line tools (like youtube-dl) and so it’s just much easier to use a non-minimal image for her use case.

        I’ve done this with some trivial projects, but I’ve found it really hard to use FROM scratch because if you ever need to make http requests out, you need a cert bundle. And if you ever need other tools, you need to make sure you have some sort of libc… and at that point, you might as well just keep alpine around at a bare minimum.

      2. 3

        My Alpine image is based on a lot of compromises and the like made over the years. I ended up using it as a “universal base” because I had spent so much time making sure it would work as a generic “do stuff” image. I use it for a lot more than just this site.

        However, Nix basically obsoleted it lol.

    15. 1

      I’m glad to see I’m not alone in the camp of running multiple backup methods on my Mac.

      I have my desktop and laptop configured to have Time Machine back up to my FreeNAS box. It works fine for a while, but like the author, after some time I get the dreaded popup that there’s a problem with the backup image and it needs to start from scratch. There are sometimes ways to fix it, but it doesn’t always work. As far as I know this error only happens when using Time Machine over the network, where it uses a sparsebundle, and not when using a local disk.

      I also have my machines back up using Arq and it’s been great. On my desktop Arq backs up my entire $HOME to FreeNAS and a subset of valuable data to Backblaze B2. My desktop also has a USB disk which SuperDuper! clones to every night as a full bootable backup.

      I’ve contemplated ditching Time Machine multiple times, but the thing is, if you get a new machine or reinstall the OS, Time Machine is the only thing that can do a full bare-metal restore at OS install time. I get all my applications, user config, etc. restored by Time Machine. Arq has all my user data, but I can’t do that kind of restore with it.

      1. 1

        As far as I know this error only happens when using Time Machine over the network - where it uses a sparsebundle - and not when using local disk.

        That’s what I’ve seen. I think that the underlying cause is grabbing the laptop and leaving the house while it’s in the middle of a backup.

        1. 1

          That’s been my presumption too, although I haven’t proved it yet.

    16. 2

      I’ve also been warned by Time Machine that the phlogistons have been phosphorylated, but that’s only ever occurred when using Time Machine to make backups on remote storage. Come to think of it, I haven’t been warned recently. Wonder if it’s not been a problem lately or if I just haven’t been warned.

      I’ve never seen it with local storage.

      My remote storage is AFP shares that live on FreeNAS servers (HP Microservers and an Odroid H2 system). My Macs are [still] running High Sierra.

      I agree that comparing remote Time Machine to local Arq is weird.

      I’m apparently also in the nervous-nellie category: my Macs do remote Time Machine backups, update remote Carbon Copy Cloner clones every night, and periodically update a rotating set of Time Machine backups on USB devices that get stored offsite.

      My main use case is putting the machine back together exactly the way it was before it lost its mind. Time Machine backups do this well, and historically CCC clones would work too (TODO: haven’t tried it lately). Getting a new Mac, creating a new user w/ the same name, and copying the home directory leaves a lot to be {done, desired}.

      Whatever your backup practice, it’s worth making a test run and seeing how well it works out.

    17. 3

      This author shows that for one website their nginx config needed two files, each with 24+ lines, much of which has to be generated with other tools. The author doesn’t mention that nginx then requires the website to be enabled by symlinking its config into the magic /etc/nginx/sites-enabled/ directory.

      In contrast, the author shows their Caddy config is only one file for two websites, with less than 24 lines of config.

      This was what prompted me to switch to Caddy from nginx four years ago. I have about forty websites at any given time running on my machine. Coming from nginx, I found having Caddyfile blocks within a single config file refreshing. My entire config file for all my websites is just 342 lines (many server blocks are just 7 lines of config). For me it was great not having to wrangle a hundred nginx config files and type ln -s dozens of times.
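
      To illustrate, a typical server block looks something like this (hostname and paths are made up; directives are Caddy v1 style):

      ```caddy
      example.com {
          root /var/www/example.com
          gzip
          tls admin@example.com
          log /var/log/caddy/example.com.log
      }
      ```

      Forty of these stack up in one Caddyfile with no symlinking or enabling step.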

      1. 3

        You can have everything in one file for nginx too.

      2. 2

        The compact, well-documented, and easy-to-read config is also the main reason I use caddy.

        Unlike other commenters, I also found it trivial to compile my own caddy for commercial use.

        1. 2

          It’s also fairly easy to automate compiling custom Caddy binaries as infrastructure-as-code. Here’s a personal FreeBSD Port that does so. As written it only supports the add-ons that I use, but it would be easy to extend. It also predates the FreeBSD Ports tree’s support for Go modules.

      3. 2

        I thought sites-enabled was just a Debian thing, not Nginx itself?

    18. 8

      This might be a good place to ask:

      • What is the BSD equivalent of the ArchWiki?
      • What is the usability tradeoff between Docker and Jails?
      • In what ways (if at all) can users contribute their own ports and make them available to other users?
      • How is BSD for gaming these days?

      These are genuine questions because I have pretty little clue about the BSD world. Would be cool if somebody with experience could share some insight. :)

      1. 6

        What is the BSD equivalent of the ArchWiki?

        The handbook (which, incidentally, is very good).

        In what ways (if at all) can users contribute their own ports and make them available to other users?

        There’s not terribly good tooling for unofficial ports. They can be done, and have been done, but generally this will take the form of a whole alternate ports tree with a couple of changes.

        How is BSD for gaming these days?

        The main person working on this is myfreeweb (actually, I think he uses lobsters, so maybe he can say better than I can); see here. The answer is ‘not great’, but also close to ‘quite good’. There is excellent driver support for nvidia and AMD GPUs. You can run most emulators natively. However, if you want to use wine, you will probably have to compile it yourself, because the versions in ports come in 32-bit-only and 64-bit-only varieties (no, they can’t co-exist), and you almost certainly want the version that can run both 32-bit and 64-bit apps. There is a linux emulator, but it can’t run Steam (I did some work to try to get it running a while back, but it needs work on the kernel side, which was too much of a commitment, so I gave up), limiting its usefulness.

        1. 2

          Thanks! Do you know if there’s an easier way to contribute to the handbook than to write suggestions to the mailing list? Do you know if small user-to-user tips (for instance for rather specific hardware fixes) are allowed on the handbook? If not, where would those end up?

        2. 2

          @myfreeweb tags them with an email notification. If you’re replying to that person, leave off the @ so they don’t get hit with two emails for reply and @ mention notifications.

        3. 1

          I’m not really working on gaming all that much, the last really “gaming” thing I did was a RetroArch update (that still didn’t land in upstream ports…) For gaming, I usually just reboot into Windows.

      2. 2

        How is BSD for gaming these days?

        I’d say the biggest effort is being undertaken by the openbsd_gaming community. A good starting point is the subreddit, then you can follow the most active members on Twitter or Mastodon to get more updates

      3. 2

        In what ways (if at all) can users contribute their own ports and make them available to other users?

        1. You can submit a new port, e.g.
        2. You can update an existing port, e.g.
        3. You can set up a custom category that contains your additions to the ports tree (I have not tried this…), e.g.
        4. You can set up a poudriere build system to build your own packages from the ports tree, e.g. with non-standard settings (it’s harder to pronounce than do…), e.g. then use portshaker to merge it with the standard tree within the poudriere hierarchy, e.g.

        I’ve found building packages for the ports that I want, the way that I want, and then installing/upgrading them with pkg is a much smoother experience than installing directly from ports and then upgrading with one of the ports management tools.

        1. 1

          Thanks for the answer. You seem quite knowledgeable. How do people share build scripts for software that may not be shipped in binary form but that you can build yourself locally if you have the data? I’m thinking about some NVIDIA projects (like OptiX) or some games. Basically, is there an AUR for FreeBSD anywhere? I checked your links and obviously ports can be shared amongst users but I’m just curious whether there’s an index for those user-contributed ports anywhere.

          1. 2

            Sorry for the delay, missed your reply/question.

            I’m not familiar with AUR, but assume you mean the Archlinux User Repository.

            I don’t know of anything similar in the BSD world. At the least-automatically-shared end, people write software that works on the BSD’s and distribute it with instructions on what it needs. At the most-automatically-shared end, people contribute “ports” to the FreeBSD Ports tree. Automation oriented folks like me end up with their own collection of ports that explicitly rely on the FreeBSD Ports tree. I don’t know of anything that formalizes either the discovery of, or dependence on, other people’s personal ports. It hasn’t ever been an issue for me.

            1. 1

              Alright, makes sense. Thanks

      4. 1

        I ran into a thing just yesterday: jails (can) get an IP of their own, which seems to be automatically added to the host’s interface, but they do not (and AFAIK cannot) get their own MAC. This is FreeNAS for me (with iocage), and it’s a little annoying because my FritzBox router seems to have a problem with port forwards now. But maybe I’m wrong and just haven’t solved it properly.

        In this case, with Docker it would at least be possible to just use PORTS/EXPOSE and the host’s main IP.

        Apart from that I’ve never encountered problems with jails and found them really smooth to work with.

        1. 2

          You can give a jail a whole virtual network interface (epair/vnet) and then bridge it or whatever. You can also just use the host’s networking if you don’t need to isolate networking at all for that jail.

          1. 1

            Thanks, that’s a good term to search for. I’m just a little surprised it (suddenly) doesn’t work anymore in my setup. My research so far has been inconclusive, with a lot of people saying that it can’t be done (in version X).

    19. 2

      Looks like you’re at wemake. I haven’t had a chance to play [yet], so I thought I’d ask: how does the flake8 configuration get along with automated formatters? It’d be nice if it got along easily with yapf (or black, or …).

      1. 2

        I found a page that seems to have the information you’re looking for: Auto-formatters – wemake-python-styleguide. To summarize, wemake-python-styleguide is compatible with autopep8, isort, and add-trailing-comma, but it is not compatible with yapf or black.

        1. 2

          Thanks for finding that link, good reading. While they explain why black isn’t going to work,

          black itself is actually not compatible with PEP8 and flake8

          they only explain that they weren’t able to come up with a yapf configuration that matched their choices

          If you have a working configuration for both yapf and wemake-python-styleguide, please, let us know!

      2. 1

        Sorry, I missed your question.

        We don’t support black or any other auto-formatters, but we are compatible with autopep8 and autoflake8 by design.

        Why don’t we support auto-formatters? Because we believe that developers should write code and pay maximum attention to the process. A good linter eliminates the need for auto-formatters. Docs:

    20. 3

      FreeBSD 12.1 server at ARP Networks (big fan!).

      • Email (postfix, dovecot, rspamd, clamav, mailman, procmail) using past experience and ideas from here:
      • Web services via caddy
      • zfs snapshots and sync to a FreeNAS system at home via sanoid/syncoid.

      I update the base system via freebsd-update.

      I build my own set of packages from the ports tree with specific options set, via poudriere and install/update via pkg.

      1. 1

        Running OpenBSD here, but also on ARP (their servers in Frankfurt). Stable as a rock so far, no issues.

        Email is not nearly as important to me as it was a few years back, so I don’t really care if sending sometimes wouldn’t work. But in practice it always does anyway.