Threads for stephank

  1. 2

    This is toying with my emotions, because I don’t really use Erlang and friends, but now I get to miss out on using a tool with an absolutely fantastic name. 🙃 It sounds like so much fun to casually throw in conversations.

    1. 6

      It seems to me that if one is going to go that far off the beaten path (i.e. not just running “docker build”), then it would also be worth looking into Buildah, a flexible image build tool from the same group as Podman. Have you looked into Buildah yet? I haven’t yet used it in anger, but it looks interesting.

      1. 6

        +1000 for Buildah.

        No more dind crap in your CI.

Lets you export your image in OCI format for, among other things, security scanning before pushing.

        Overall much better than Docker’s build. Highly recommend you try it.

        1. 3

Added looking into it to my todo list; thanks for the suggestion, @mwcampbell and @ricardbejarano.

          1. 2

I’m intrigued: what do you use for security scanning the image?

            1. 4

              My (GitLab) CI for building container images is as follows:

              • Stage 1: lint Dockerfile with Hadolint.
              • Stage 2: perform static Dockerfile analysis with Trivy (in config mode) and TerraScan.
              • Stage 3: build with Buildah, export to a directory in the OCI format (buildah push myimage oci:./build, last time I checked, you can’t do this with the Docker CLI), pass that as an artifact for the following stages.
              • Stage 4a: look for known vulns within the contents of the image using Trivy (this time in image mode) and Grype.
              • Stage 4b: I also use Syft to generate the list of software in the image, along with their version numbers. This has been useful more times than I can remember, for filing bug reports, comparing a working and a broken image, etc.
              • Stage 5: if all the above passed, grab the image back into Buildah (buildah pull oci:./build, can’t do this with Docker’s CLI either) and push it to a couple of registries.

The tools in stage 2 pick up most of the “security bad practices”. The tools in stage 4 give me the list of known vulnerabilities in the image’s contents, along with their CVE IDs, severity, and whether there’s a fix in a newer release or not.

              Having two tools in both stages is useful because it increases coverage, as some tools pick up vulns that others don’t.

Scanning before pushing lets me decide whether I want the new, surely-vulnerable image over the old one (which may or may not be vulnerable as well). I only perform this manual intervention on high and critical severities, though.
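For anyone curious, a rough sketch of what stages 3–5 might look like in .gitlab-ci.yml (job and image names are placeholders, tool installation is omitted, and I’d double-check each flag against the docs; Trivy is left out here to keep the sketch short):

```yaml
stages: [build, scan, push]

build:
  stage: build
  script:
    - buildah bud -t myimage .
    # export as an OCI layout directory; Docker's CLI has no equivalent
    - buildah push myimage oci:./build
  artifacts:
    paths: [build/]

scan:
  stage: scan
  script:
    # both grype and syft understand an OCI layout via the oci-dir: scheme
    - grype oci-dir:./build
    - syft oci-dir:./build

push:
  stage: push
  script:
    # grab the scanned artifact back into Buildah and push it out
    - buildah pull oci:./build
    - buildah push myimage docker://registry.example.com/myimage:latest
```

The point of the artifact hand-off is that the exact bytes that were scanned are what gets pushed, rather than rebuilding between stages.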

              1. 1

Thanks for the response. What are your thoughts on it? It seems to replace both Grype and Trivy.

                1. 1

                  I haven’t used it, can’t judge.

                  Thanks for showing it to me.

            2. 1

I’ve never used dind; I’ve only used Jenkins and GitHub Actions. Is that a common thing?

              1. 1

                IIRC GitHub Actions already has a Docker daemon accessible from within the CI container. So you’re already using Docker in Whatever on your builds.

                There are many problems with running the Docker daemon within the build container, and IMO it’s not “correct”.

                A container image is just a filesystem bundle. There’s no reason you need a daemon for building one.

            3. 4

              I have not looked at it, but my understanding is that Podman’s podman build is a wrapper around Buildah. So as a first pass I assume podman build has similar features. It does actually have at least one feature that docker build doesn’t, namely volume mounts during builds.

              1. 2

If I remember correctly, the Buildah docs note that while yes - podman build is basically a wrapper around Buildah - it doesn’t expose Buildah’s full functionality, aiming instead to be a simple wrapper for people coming from Docker. I can’t recall which specific functionality was hidden from the user, but it was listed in the docs.

            1. 25

So ditch your version manager, and use … a different version manager? One where I have to find the specific nixpkgs SHA if I want a specific version of the package I need.

              Does this let me install an old version of Python AND the latest nodejs? I suspect not, but please correct me if this is possible.

              I’m personally very happy with asdf and its meta-version manager capabilities, but you do you.

              1. 22

                The article espouses discarding all package-specific version management. The hope is that multiple distinct packaging systems can be ignored in favor of a single holistic ports tree.

                $ nix-shell -p python27 nodejs_latest
                $ node --version
                $ python2 --version
                Python 2.7.18

                Close enough. All versions can be fine-tuned, but ultimately even the most pedantic versioning policy usually crumbles before Nix’s pragmatism; you probably don’t need “the latest” version of most packages.

                Note that asdf’s documentation itself admits that asdf does not aim for reproducibility. This means that while your configuration may reliably work for you on your machines, it may take others a long time to reproduce your results.

                1. 12

                  The article mentions how you can do that. In short:

let
  pkgs = import <nixpkgs> { };
in
pkgs.mkShell {
  buildInputs = [
    pkgs.python27Full # old python
    pkgs.nodejs-16_x  # latest node
  ];
}

                  If you want a specific version that is not present, you can create an overlay and change the version to whichever you need (see this diff as an example).
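To give a flavour of what such an overlay might look like, here is a sketch that swaps in a different Node.js version (the version is arbitrary and the sha256 is a placeholder you’d replace with the real hash):

```nix
let
  overlay = self: super: {
    nodejs = super.nodejs.overrideAttrs (old: rec {
      version = "16.3.0"; # whichever version you actually need
      src = super.fetchurl {
        url = "https://nodejs.org/dist/v${version}/node-v${version}.tar.xz";
        # placeholder; Nix will tell you the expected hash on first build
        sha256 = "0000000000000000000000000000000000000000000000000000";
      };
    });
  };
  pkgs = import <nixpkgs> { overlays = [ overlay ]; };
in
pkgs.mkShell { buildInputs = [ pkgs.nodejs ]; }
```

The overlay replaces the package’s source while reusing the rest of its build recipe, which is usually all a version bump needs.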

                  The title refers to the fact that, in my view, your current version manager does not do enough. It should also include tooling that you need to run the project and make sure each program’s dependency tree is explicitly marked.

                  1. 3

Sibling comments are correct, but note that they only point to Python and Node.js versions that are currently in Nixpkgs. This doesn’t necessarily differ from other package managers and repositories: at some point, old versions are kicked out, usually some time after upstream drops support.

                    What’s different however is that you can pin multiple versions of Nixpkgs as well. You can pull an old, unsupported Python version from an old Nixpkgs, and use the latest Nixpkgs for everything else, like Node.js.

But you’re right, you still have to find that Nixpkgs SHA. (I don’t often do this, though; more often, I end up sticking to some last-known-working SHA.)
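Mixing an old pin with the latest Nixpkgs looks roughly like this (the rev and sha256 are placeholders for whatever last-known-working pin you have on hand):

```nix
let
  # an old Nixpkgs pinned to a revision that still ships the Python you need
  oldPkgs = import (fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<old-rev>.tar.gz";
    sha256 = "<sha256-of-that-tarball>";
  }) { };
  # the latest Nixpkgs for everything else
  newPkgs = import <nixpkgs> { };
in
newPkgs.mkShell {
  buildInputs = [ oldPkgs.python27 newPkgs.nodejs_latest ];
}
```

Since each import is just a value, nothing stops you from pulling individual packages out of as many pinned trees as you like.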

                    1. 2

                      Thank you. I realized my scenario wasn’t clear but you addressed it. Mixing multiple versions of nixpkgs is exactly what I was looking for.

                  1. 2

                    These are good points!

                    I’ve also found the ‘laziness’ of the compiler sometimes bites. You can define a function, then not use it (perhaps forgotten after some rewriting), and then the compiler will simply ignore it completely, beyond syntax errors. You can have nonsensical stuff in there, like accessing fields that don’t exist, and it’s not an error.

                    In general, Zig does some fun things, but I don’t really see why I would choose it over something else, except for its C interop. Completely manual memory management is a lot to ask, when the competition is Rust and C++.

                    1. 4

                      You can define a function, then not use it (perhaps forgotten after some rewriting), and then the compiler will simply ignore it completely, beyond syntax errors.

This has changed in the latest builds of the master branch; the compiler is now much stricter and will even complain about unused variables. It’s all part of the big rewrite for the self-hosted compiler that is progressively being integrated with the current hybrid Zig/C++ compiler.

                    1. 24

                      Ancient UI? I’m actually incredibly impressed by the Fastmail web UI. It’s one of the fastest large web applications I can think of.

                      1. 8

I’m a bit miffed by that too. It feels way more polished and accessible than Gmail’s UI, particularly if I want to modify any settings. I dread using Gmail’s settings UI.

                        1. 2

I never meant to say that the UI is hard to use, just that it doesn’t look in line with the modern design principles employed by most websites/mobile apps. I agree that the configuration is much simpler than Gmail’s. As I said above, I think the user experience (UX) is good, but they could throw a fresh coat of paint on the UI and flatten it a bit (which is what ProtonMail 4 recently did).

                          1. 4

                            I can only speak from my own perspective, but I sincerely hope they don’t do anything that you suggested (“fresh coat of paint” and “flatten it a bit”). In my opinion, “modern design” != “best design”. I love the way the Fastmail web UI looks and hope they don’t change it just for the sake of change. It looks and works great as it is.

                            (That said, I think the rest of your article was great!)

                        2. 4

It does feel dated in the age of mostly flat UIs, at least to me. I guess that’s subjective, but I certainly liked the UIs of Gmail, HEY and ProtonMail 4 more. The UX is good, though; I just think that Fastmail could use a fresh coat of paint.

                          1. 4

                            I’ve been a happy FastMail customer for years, but I never use the webmail UI, except when editing server side settings. IMAP lets me use my mail program of choice and work offline.

                            1. 4

Fastmail’s web UI is the #1 reason I’m currently using Fastmail. I love it. The app, on the other hand… often while the app is loading, I tap the “calendar” button, the “archive” button pops up under my finger, and an unknown email gets archived :-/

                            1. 1

                              Combining all of these in a project sounds much more complex, though. Especially in popular high level languages.

                              Maybe that’s just because we have lots of existing tooling for process management, RPC and service discovery.

                              1. 2

                                I’m just a simple vim user, but doing all-keyboard interaction in a single tabbed iTerm window for editing and running commands is why I haven’t been able to master VS Code or other GUI editors yet. I’ve tried to learn keyboard shortcuts, but that always falls flat at some point, and not touching the mouse is how I fixed my RSI-like symptoms. (Vim emulator plug-ins fall flat at simple things, from my experience.)

(Edit: brought this up because I think it’s super important to have shell and editor close to each other as well.)

                                Emacs looks wild. Sometimes I wonder if the grass really is greener there. I’m definitely not doing OP’s magic in vim. (But I’ve also never felt slow.)

                                1. 7

I’m a vim user who has been using Emacs for about a year. It can feel slow with too many modules or really big files, but usually (with daemon mode) it’s plenty snappy. Evil mode plus a few hacks, and I often can’t tell I’ve left. I still type vim to open files (using an alias).

However, there’s no need to use Emacs as a code editor if you just want to try its other apps, like shells or org. It’s a Lisp machine that happens to include a text editor. evil-org, or evil mode on eshell, work quite well in their own right.

                                  1. 1

                                    (Vim emulator plug-ins fall flat at simple things, from my experience.)

FWIW, if you do want to try VS Code with vi keybindings, I recommend it because it’s quite accurate: it runs commands through an actual running copy of Neovim. It is a large improvement over its main competitor, vscodevim. There are some infelicities, but oh well; macros work.

                                    Emacs looks wild. Sometimes I wonder if the grass really is greener there. I’m definitely not doing OP’s magic in vim. (But I’ve also never felt slow.)

A warning since you mentioned RSI: if you do try switching, stick to a vi emulation mode like evil-mode or whatever. (There are at least two vi-keybinding plugins for Emacs; I can’t remember what they’re called, but they’re relatively accurate.) Do not try Emacs’s default keybindings. The sheer amount of chording they make you do all the time is hell on wrists.

                                  1. 1

                                    I agree that by and large flakes are a good thing. Having a standard layout for Nix repositories, proper versioning through lock files, and impurities removed are a huge step forward. However, I also agree with e.g. andir that it would probably have been better if work had been started with a pure-nix implementation to hash out the UI/UX. [1] It would have allowed faster iteration and more community members could have contributed to the design, because most people in the Nix community are more familiar with Nix than C++. Also, it would have decoupled Nix 2.4 from flakes.

                                    But it is always easy to criticize things in hindsight, it is often hard to anticipate how processes unfold. As you said, the Nix/nixpkgs/NixOS community has grown at a very fast pace, and we have to learn to deal with that as a community.

At any rate, I think we are past the point of no return, and the best path forward is for people to try flakes, submit bug reports, and get them into a shape where they can ship. Perhaps UX gripes can be addressed in a future iteration using editions. I think there are now too many flakes out there to make very large breaking changes. I don’t want to recommend flakes for production systems, since they are still unstable, but I switched my main desktop to flakes months ago without any issues. I have also converted some of my projects to flakes, using flake-compat for compatibility with Nix 2.3.

                                    [1] Eelco Dolstra even made a nice flake-compat project, which can be used to evaluate flake.nix/flake.lock files for compatibility with pre-flake Nix versions.
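For reference, flake-compat usage amounts to a small shim; a minimal default.nix looks something like this (in real use you’d pin the flake-compat tarball to a revision and hash rather than master):

```nix
# default.nix: lets pre-flake Nix (e.g. 2.3) build the flake's default package
(import (fetchTarball "https://github.com/edolstra/flake-compat/archive/master.tar.gz") {
  src = ./.;
}).defaultNix
```

An analogous shell.nix can use the `.shellNix` attribute, so nix-build and nix-shell keep working for users who haven’t enabled flakes.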

                                    1. 1

I always saw flakes as some sort of formalisation of Niv’s functionality, but I guess that’s not it. Are flakes older than Niv? Did Niv not play a part in the RFC process?

                                      1. 3

I am not sure which was first, but the first Niv commit was on Jan 17, 2019, while the flakes MVP was posted in October 2018.

                                        Niv overlaps with the flake locking mechanism. But flakes provide much more, such as a standard Nix repository API, and it enables pure evaluation (e.g. using impure functions such as builtins.currentSystem is not allowed).
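To make the “standard repository API” point concrete, a minimal flake.nix looks something like this (system hardcoded and package choice arbitrary, just to show the shape):

```nix
{
  description = "a minimal flake";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";

  outputs = { self, nixpkgs }: {
    # the fixed output schema is what gives every flake the same API
    defaultPackage.x86_64-linux = nixpkgs.legacyPackages.x86_64-linux.hello;
  };
}
```

Inputs are locked in flake.lock, which is where the Niv-like pinning lives; the outputs schema and pure evaluation are the parts Niv never attempted.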

                                    1. 2

Rust people like to say that Rust releases every six weeks, compared to C++, where it’s hard to use even C++17. But going forward, the days when you could use new Rust features within a year will be gone and won’t return. It is sad, but a natural process.

                                      1. 2

                                        I think this will be more of a concern for packagers than developers.

I don’t believe many people who actually develop in newer languages like Rust or Go, or even older ecosystems like Node.js, Python or Ruby, care about how their application is packaged. You either use the language ecosystem, provide binaries, or provide another standalone installation method.

                                        Packaging only becomes a concern once a packager shows interest, and then it’s too late for large changes to the build & release process.

                                        1. 4

It may be true that developers don’t care, but users definitely do. So developers should care, once they’ve acquired enough users.

                                        2. 2

                                          I’m still targeting the latest stable only, and get away with it. At Cloudflare we update Rust the day it’s released, and it’s in production within a week.

                                          1. 3

                                            I used C++17 at work in 2018, but no, that’s not representative. I think it will be the same for Rust.

                                            1. 1

                                              At Cloudflare we update Rust the day it’s released, and it’s in production within a week.

                                              How many engineers work with Rust there? (compared to a small startup).

                                              1. 2

                                                At least 3 startups worth of engineers ;) It’s now powering many critical components and user-facing services.

Since Cloudbleed, things have been moving to Rust where possible. Rust is regularly picked for new projects (Golang and some other memory-safe languages are used too).

So far compiler upgrades haven’t caused any major problems. The biggest issue was a regression in compilation speed in 1.46.

                                          1. 37

Hello, I am here to derail the Rust discussion before it gets started. The culprit behind sudo’s vast repertoire of vulnerabilities, and more broadly its bugs in general, is almost entirely one thing: its runaway complexity.

                                            We have another tool which does something very similar to sudo which we can compare with: doas. The portable version clocks in at about 500 lines of code, its man pages are a combined 157 lines long, and it has had two CVEs (only one of which Rust would have prevented), or approximately one every 30 months.
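To make the contrast concrete, a typical complete doas policy fits in a line or two; a sketch, assuming the usual wheel group (see doas.conf(5) for the rule syntax):

```
# /etc/doas.conf: the entire policy
permit persist :wheel
permit nopass root as root
```

That is the whole configuration surface, versus a sudoers grammar that needs EBNF to describe.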

sudo is about 120,000 lines of code (100x more), and it’s had 140 CVEs, or about one every 2 months since the CVE database came into being 21 years ago. Its man pages are about 10,000 lines and include the following:

                                            $ man sudoers | grep -C1 despair
                                            The sudoers file grammar will be described below in Extended Backus-Naur
                                            Form (EBNF).  Don't despair if you are unfamiliar with EBNF; it is fairly
                                            simple, and the definitions below are annotated.

                                            If you want programs to be more secure, stable, and reliable, the key metric to address is complexity. Rewriting it in Rust is not the main concern.

                                            1. 45

it’s had 140 CVEs

Did you even look at that list? Most of those are not sudo vulnerabilities but issues in sudo configurations that distros ship with. The actual list is more like 39; a number of them are “disputed” and most are low-impact. I didn’t do a full detailed analysis of the issues, but the implication that sudo has had “140 security problems” is simply false.

                                              sudo is about 120,000 lines of code

More like 60k if you exclude the regress (tests) and lib directories, and 15k if you exclude the plugins (although the sudoers plugin, which most people use, is another 40k lines). Either way, it’s well under 120k.

                                              Its man pages are about 10,000 lines and include the following:

                                              12k, but this also includes various technical documentation (like the plugin API); the main documentation in sudoers(1) is 741 lines, and sudoers(5) is 3,255 lines. Well under half of 10,000.

                                              We have another tool which does something very similar to sudo which we can compare with: doas.

                                              Except that it only has 10% of the features, or less. This is good if you don’t use them, and bad if you do. But I already commented on this at HN so no need to repeat that here.

                                              1. 12

You’re right that these numbers are a back-of-the-napkin analysis. But even your more detailed analysis shows that the situation is much graver with sudo. I am going to include the plugins, because if they ship, they’re a liability. And their docs, because they felt the need to write them. You can’t just shove the complexity you don’t use and/or like under the rug. Heartbleed brought the internet to its knees because of a vulnerability in a feature no one uses.

And yes, doas has 10% of the features by count - but it has 99% of the features by utility. If you need something in the 1%, what right do you have to shove it into my system? Go make your own tool! Your little feature which is incredibly useful to you is incredibly non-useful to everyone else, which means fewer eyes on it, and it’s a security liability to 99% of systems as such. Not every feature idea is meritorious. Scope management is important.

                                                1. 9

                                                  it has 99% of the features by utility

                                                  Citation needed.

                                                  what right do you have to shove it into my system?

                                                  Nobody is shoving anything into your system. The sudo maintainers have the right to decide to include features, and they’ve been exercising that right. You have the right to skip sudo and write your own - and you’ve been exercising that right too.

                                                  Go make your own tool!

                                                  You’re asking people to undergo the burden of forking or re-writing all of the common functionality of an existing tool just so they can add their one feature. This imposes a great cost on them. Meanwhile, including that code or feature into an existing tool imposes only a small (or much smaller) cost, if done correctly - the incremental cost of adding a new feature to an existing system.

The key phrase here is “if done correctly”. The consensus seems to be that sudo is suffering from poor engineering practices - few or no tests, including with the patch that (ostensibly) fixes this bug. If your software engineering practices are bad, then simpler programs will have fewer bugs only because there’s less code to have bugs in. This is not a virtue. Large, complex programs can be built to be (relatively) safe by employing tests, memory checkers, good design practices, good architecture (which also reduces accidental complexity), code reviews, and technologies that help mitigate errors (whether that be a memory-safe GC-less language like Rust or a memory-safe GC’ed language like Python). Most features can (and should) be partitioned off from the rest of the design, either through compile-time flags or runtime architecture, which prevents them from incurring security or performance penalties.

                                                  Software is meant to serve the needs of users. Users have varied use-cases. Distinct use-cases require more code to implement, and thereby incur complexity (although, depending on how good of an engineer one is, additional accidental complexity above the base essential complexity may be added). If you want to serve the majority of your users, you must incur some complexity. If you want to still serve them, then start by removing the accidental complexity. If you want to remove the essential complexity, then you are no longer serving your users.

The sudo project is probably designed to serve the needs of the vast majority of the Linux user-base, and it succeeds at that, for the most part. doas very intentionally does not serve the needs of the vast majority of the Linux user-base. Don’t condemn a project for trying to serve more users than you are.

Not every feature idea is meritorious.

Serving users is meritorious - or do you disagree?

                                                  1. 6

                                                    Heartbleed brought the internet to its knees because of a vulnerability in a feature no one uses.

Yes, but the difference is that these are features people actually use, which wasn’t the case with Heartbleed. Like I mentioned, I think doas is great – I’ve been using it for years and never really used (or liked) sudo because I felt it was far too complex for my needs; before doas I just used su. But I can’t deny that for a lot of other people (mainly organisations, which is the biggest use-case for sudo in the first place) these features are actually useful.

                                                    Go make your own tool! Your little feature which is incredibly useful to you is incredibly non-useful to everyone else

A lot of these things aren’t “little” features, and many interact with other features. What if I want doas + 3 flags from sudo + LDAP + auditing? There are many combinations possible, and writing a separate tool for every one of them isn’t really realistic. All of this also requires maintenance, and reliable, consistent long-term maintainers are kind of rare.

                                                    Scope management is important.

                                                    Yes, I’m usually pretty explicit about which use cases I want to solve and which I don’t want to solve. But “solving all the use cases” is also a valid scope. Is this a trade-off? Sure. But everything here is.

                                                    The real problem isn’t so much sudo; but rather that sudo is the de-facto default in almost all Linux distros (often installed by default, too). Ideally, the default should be the simplest tool which solves most of the common use cases (i.e. doas), and people with more complex use cases can install sudo if they need it. I don’t know why there aren’t more distros using doas by default (probably just inertia?)

                                                    1. 0

                                                      What if I want doas + 3 flags from sudo + LDAP + auditing?

Tough shit? I want a pony, and a tuba, and a Barbie doll…

                                                      But “solving all the use cases” is also a valid scope.

                                                      My entire thesis is that it’s not a valid scope. This fallacy leads to severe and present problems like the one we’re discussing today. You’re begging the question here.

                                                      1. 4

                                                        Tough shit? I want a pony, and a tuba, and barbie doll…

                                                        This is an extremely user-hostile attitude to have (and don’t try claiming that telling users with not-even-very-obscure use-cases to write their own tools isn’t user-hostile).

                                                        I’ve noticed that some programmers are engineers that try to build tools to solve problems for users, and some are artists that build programs that are beautiful or clever, or just because they can. You appear to be one of the latter, with your goal being crafting simple, beautiful systems. This is fine. However, this is not the mindset that allows you to build either successful systems (in a marketshare sense) or ones that are useful for many people other than yourself, for previously-discussed reasons. The sudo maintainers are trying to build software for people to use. Sure, there’s more than one way to do that (integration vs composition), but there are ways to do both poorly, and claiming the moral high ground for choosing simplicity (composition) is not only poor form but also kind of bad optics when you haven’t even begun to demonstrate that it’s a better design strategy.

                                                        My entire thesis is that it’s not a valid scope.

                                                        A thesis which you have not adequately defended. Your statements have amounted to “This bug is due to sudo’s complexity which is driven by the target scope/number of features that it has”, while both failing to provide any substantial evidence that this is the case (e.g. showing that sudo’s bugs are due to feature-driven essential complexity alone, and not use of a memory-unsafe language, poor software engineering practices (which could lead to either accidental complexity or directly to bugs themselves), or simple chance/statistics) and not actually providing any defense for the thesis as stated. Assume that @arp242 didn’t mean “all” the usecases, but instead “the vast majority” of them - say, enough that it works for 99.9% of users. Why is this “invalid”, exactly? It’s easy for me to imagine the argument being “this is a bad idea”, but I can’t imagine why you would think that it’s logically incoherent.

                                                        Finally, you have repeatedly conflated “complexity” and “features”. Your entire argument is, again, invalid if you can’t show that sudo’s complexity is purely (or even mostly) essential complexity, as opposed to accidental complexity coming from being careless etc.

                                                  2. 9

I don’t think “users (distros) make a lot of configuration mistakes” is a good defence when arguing whether complexity is the issue.

                                                    But I do agree about feature set. And I feel like arguing against complexity for safety is wrong (like ddevault was doing), because systems inevitably grow complex. We should still be able to build safe, complex systems. (Hence why I’m a proponent of language innovation and ditching C.)

                                                    1. 11

                                                      I don’t think “users (distros) make a lot of configuration mistakes” is a good defence when arguing if complexity is the issue.

                                                      It’s silly stuff like (ALL : ALL) NOPASSWD: ALL. “Can run sudo without a password” seems like a common theme: some shell injection is found in the web UI and because the config is really naïve (which is definitely not the sudo default) it’s escalated to root.
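                                                      To make that concrete, here’s a hypothetical pair of sudoers entries (the user name is made up) contrasting the naïve grant with a narrow one:

                                                      ```
                                                      # Anything, as anyone, no password: any shell injection in a web UI
                                                      # running as this user escalates straight to root.
                                                      www-data ALL=(ALL : ALL) NOPASSWD: ALL

                                                      # Far narrower: a single fixed command line, nothing else.
                                                      www-data ALL=(root) NOPASSWD: /usr/sbin/service nginx reload
                                                      ```

                                                      The second entry is still not risk-free (the command itself must be safe to run as root), but it no longer hands out a general root shell.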

                                                      Others aren’t directly related to sudo configuration as such; for example this one has a Perl script which is run with sudo that can be exploited to run arbitrary shell commands. This is also a common theme: some script is run with sudo, but the script has some vulnerability and is now escalated to root as it’s run with sudo.

                                                      I didn’t check all of the issues, but almost all that I checked are one of the above; I don’t really see any where the vulnerability is caused directly by the complexity of sudo or its configuration; it’s just that running anything as root is tricky: setuid returns 432 results, three times that of sudo, and I don’t think that anyone can argue that setuid is complex or that setuid implementations have been riddled with security bugs.

                                                      Others just mention sudo in passing, by the way; this one is really about an unrelated remote exec vulnerability, and just mentions “If QCMAP_CLI can be run via sudo or setuid, this also allows elevating privileges to root”. And this one isn’t even about sudo at all, but about a “sudo mode” plugin for TYPO3, presumably to allow TYPO3 users some admin capabilities without giving away the admin password. And who knows why this one is even returned in a search for “sudo” as it’s not mentioned anywhere.

                                                      1. 3

                                                        it’s just that running anything as root is tricky: setuid returns 432 results, three times that of sudo

                                                        This is comparing apples to oranges. setuid affects many programs, so obviously it would have more results than a single program would. If you’re going to attack my numbers, then at least run the same logic over your own.

                                                        1. 2

                                                          It is comparing apples to apples, because many of the CVEs are about other programs’ improper sudo usage, similar to improper/insecure setuid usage.

                                                          1. 2

                                                            Well, whatever we’re comparing, it’s not making much sense.

                                                            1. If sudo is hard to use and that leads to security problems through its misusage, that’s sudo’s fault. Or do you think that the footguns in C are not C’s fault, either? I thought you liked Rust for that very reason. For this reason the original CVE count stands.
                                                            2. But fine, let’s move on on the presumption that the original CVE count is not appropriate to use here, and instead reference your list of 39 Ubuntu vulnerabilities. 39 > 2, Q.E.D. At this point we are comparing programs to programs.
                                                            3. You now want to compare this with 432 setuid results. You are comparing programs with APIs. Apples to oranges.

                                                            But, if you’re trying to bring this back and compare it with my 140 CVE number, it’s still pretty damning for sudo. setuid is an essential and basic feature of Unix, which cannot be made any smaller than it already is without sacrificing its essential nature. It’s required for thousands of programs to carry out their basic premise, including both sudo and doas! sudo, on the other hand, can be made much simpler and still address its most common use-cases, as demonstrated by doas’s evident utility. It also has a much smaller exposure: one non-standard tool written in the 80’s and shunted along the timeline of Unix history ever since, compared to a standardized Unix feature introduced by DMR himself in the early 70’s. And setuid somehow has only 4x the number of footgun incidents? sudo could do a hell of a lot better, and it can do so by trimming the fat - a lot of it.

                                                            1. 3

                                                              If sudo is hard to use and that leads to security problems through its misusage, that’s sudo’s fault.

                                                              It’s not because it’s hard to use, it’s just that its usage can escalate other more (relatively) benign security problems, just like setuid can. This is my point, as a reply to stephank’s comment. This is inherent to running anything as root, with setuid, sudo, or doas, and why we have capabilities on Linux now. I bet that if doas would be the default instead of sudo we’d have a bunch of CVEs about improper doas usage now, because people do stupid things like allowing anyone to run anything without password and then write a shitty web UI in front of that. That particular problem is not doas’s (or sudo’s) fault, just as cutting myself with the kitchen knife isn’t the knife’s fault.

                                                              reference your list of 39 Ubuntu vulnerabilities. 39 > 2, Q.E.D.

                                                              Yes, sudo has had more issues in total; I never said it doesn’t. It’s just a lot lower than what you said, and quite a number are very low-impact, so I just disputed the implication that sudo is a security nightmare waiting to happen: its track record isn’t all that bad. As always, more features come with more (security) bugs, but use cases do need solving somehow. As I mentioned, it’s a trade-off.

                                                              sudo, on the other hand, can be made much simpler and still address its most common use-cases, as demonstrated by doas’s evident utility

                                                              We already agreed on this yesterday on HN, which I repeated here as well; all I’m adding is “but sudo is still useful, as it solves many more use cases” and “sudo isn’t that bad”.

                                                              Interesting thing to note: sudo was removed from OpenBSD by someone who is also the sudo maintainer. I think he’ll agree that “sudo is too complex for it to be the default”, which we already agree on, but not that sudo is “too complex to exist”, which is where we don’t agree.

                                                              Could sudo be simpler or better architected to contain its complexity? Maybe. I haven’t looked at the source or use cases in-depth, and I’m not really qualified to make this judgement.

                                                      2. 5

                                                        I think arguing against complexity is one of the core principles of UNIX philosophy, and it’s gotten us quite far on the operating system front.

                                                        If simplicity had been a design goal of sudo, this particular vulnerability would not have been possible to trigger: why have sudoedit in the first place, when it just implies the -e flag? That much is a guarantee.

                                                        Had it ditched C, there would be no guarantee that this issue wouldn’t have happened.

                                                      3. 2

                                                        Did you even look at that list? Most of those are not sudo vulnerabilities but issues in sudo configurations distros ship with.

                                                        If even the distros can’t understand the configuration well enough to get it right, what hope do I have?

                                                      4. 16

                                                        OK maybe here’s a more specific discussion point:

                                                        There can be logic bugs in basically any language, of course. However, the following classes of bugs tend to be steps in major exploits:

                                                        • Bounds checking issues on arrays
                                                        • Messing around with C strings at an extremely low level

                                                        It is hard to deny that, in a universe where nobody ever messed up those two points, there are a lot less nasty exploits in the world in systems software in particular.

                                                        Many other toolchains have decided to make the above two issues almost non-existent through various techniques. A bunch of old C code doesn’t handle this. Is there not something that can be done here to get the same productivity and safety advantages found in almost every other toolchain for tools that form the foundation of operating computers? Including a new C standard or something?

                                                        I can have a bunch of spaghetti code in Python, but turning that spaghetti into “oh wow argv contents ran over some other variables and messed up the internal state machine” is a uniquely C problem. If everyone else can find solutions, I feel like C could as well (including by introducing new mechanisms to the language; we are not bound by what is printed in some 40-year-old books, and #ifdef is a thing).
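                                                        As a toy sketch of that failure mode (all names invented for illustration): a fixed-size buffer sitting next to other state, where an unbounded copy of attacker-controlled input would run past the buffer, while a bounded copy cannot:

                                                        ```c
                                                        #include <stdio.h>
                                                        #include <string.h>

                                                        /* Toy illustration: a fixed buffer adjacent to other program state.
                                                           An unbounded strcpy(s->buf, name) with a long, attacker-controlled
                                                           name would write past buf and could corrupt is_admin -- the "argv
                                                           ran over some other variables" failure mode. The bounded copy
                                                           below cannot: snprintf truncates and always NUL-terminates. */
                                                        struct session {
                                                            char buf[8];
                                                            int  is_admin;
                                                        };

                                                        void set_name_safely(struct session *s, const char *name) {
                                                            snprintf(s->buf, sizeof s->buf, "%s", name);
                                                        }
                                                        ```

                                                        Languages with checked strings and arrays make the unbounded variant either impossible to write or an immediate runtime error, rather than silent corruption.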

                                                        EDIT: forgot to mention this, I do think that sudo is a bit special given that its default job is to take argv contents and run them. I kinda agree that sudo is a bit special in terms of exploitability. But hey, the logic bugs by themselves weren’t enough to trigger the bug. When you have a multi-step exploit, anything on the path getting stopped is sufficient, right?

                                                        1. 14

                                                          +1. Lost in the noise of “but not all CVEs…” is the simple fact that this CVE comes from an embarrassing C string fuckup that would be impossible, or at least caught by static analysis, or at very least caught at runtime, in most other languages. If “RWIIR” is flame bait, then how about “RWIIP” or at least “RWIIC++”?

                                                          1. 1

                                                            I be confused… what does the P in RWIIP mean?

                                                            1. 3


                                                              1. 1

                                                                Python? Perl? Prolog? PL/I?

                                                              2. 2

                                                                Probably Python, given the content of the comment by @rtpg. Python is also memory-safe, while it’s unclear to me whether Pascal is (a quick search reveals that at least FreePascal is not memory-safe).

                                                                Were it not for the relative (accidental, non-feature-providing) complexity of Python to C, I would support RWIIP. Perhaps Lua would be a better choice - it has a tiny memory and disk footprint while also being memory-safe.

                                                                1. 2

                                                                  Probably Python, given the content of the comment by @rtpg. Python is also memory-safe, while it’s unclear to me whether Pascal is (a quick search reveals that at least FreePascal is not memory-safe).

                                                                  That’s possibly it.

                                                                  Perhaps Lua would be a better choice - it has a tiny memory and disk footprint while also being memory-safe.

                                                                  Not to mention that Lua – even when used without LuaJIT – is simply blazingly fast compared to other scripting languages (Python, Perl, &c)!

                                                                  For instance, see this benchmark I did some time ago: I had implemented Ackermann’s function in various languages (the “./ack” file is the one in C) to get a rough idea of their execution speed, and lo and behold, Lua turned out to be second only to the C implementation.

                                                          2. 15

                                                            I agree that rewriting things in Rust is not always the answer, and I also agree that simpler software makes for more secure software. However, I think it is disingenuous to compare the overall CVE count for the two programs. Would you agree that sudo is much more widely installed than doas (and therefore is a larger target for security researchers)? Additionally, most of the 140 CVEs linked were filed before October 2015, which is when doas was released. Finally, some of the linked CVEs aren’t even related to code vulnerabilities in sudo, such as the six Quest DR Series Disk Backup CVEs (example).

                                                            1. 4

                                                              I would agree that sudo has a bigger target painted on its back, but it’s also important to acknowledge that it has a much bigger back - 100× bigger. However, I think the comparison is fair. doas is the default in OpenBSD and very common in NetBSD and FreeBSD systems as well, which are at the heart of a lot of high-value operations. I think it’s over the threshold where we can consider it a high-value target for exploitation. We can also consider the kinds of vulnerabilities which have occurred internally within each project, without comparing their quantity to one another, to characterize the sorts of vulnerabilities which are common to each project, and ascertain something interesting while still accounting for differences in prominence. Finally, there’s also a bias in the other direction: doas is a much simpler tool, shipped by a team famed for its security prowess. Might this not dissuade it as a target for security researchers just as much?

                                                              Bonus: if for some reason we believed that doas was likely to be vulnerable, we could conduct a thorough audit on its 500-some lines of code in an hour or two. What would the same process look like for sudo?

                                                            2. 10

                                                              So you’re saying that 50% of the CVEs in doas would have been prevented by writing it in Rust? Seems like a good reason to write it in Rust.

                                                              1. 11

                                                                Another missing point is that Rust is only one of many memory safe languages. Sudo doesn’t need to be particularly performant or free of garbage collection pauses. It could be written in your favorite GCed language like Go, Java, Scheme, Haskell, etc. Literally any memory safe language would be better than C for something security-critical like sudo, whether we are trying to build a featureful complex version like sudo or a simpler one like doas.

                                                                1. 2

                                                                  Indeed. And you know, Unix has in some ways been doing this for years anyway with Perl, Python, and shell scripts.

                                                                  1. 2

                                                                    I’m not a security expert, so I’d be happy to be corrected, but if I remember correctly, using secrets safely in a garbage-collected language is not trivial. Once you’ve finished working with some secret, you don’t necessarily know how long it will remain in memory before it’s garbage collected, or whether it will be securely deleted or just ‘deallocated’ and left in RAM for the next program to read. There are ways around this, such as falling back to manual memory control for sensitive data, but as I say, it’s not trivial.

                                                                    1. 2

                                                                      That is true, but you could also do the secrets handling in a small library written in C or Rust and FFI with that, while keeping the rest of your bog-standard logic free of the issues that habitually plague every non-trivial C codebase.

                                                                      1. 2


                                                                        Besides these capabilities, ideally a language would also have ways of expressing important security properties of code. For example, ways to specify that a certain piece of data is secret and ensure that it can’t escape and is properly overwritten when going out of scope instead of simply being dropped, and ways to specify a requirement for certain code to use constant time to prevent timing side channels. Some languages are starting to include things like these.

                                                                        Meanwhile when you try to write code with these invariants in, say, C, the compiler might optimize these desired constraints away (overwriting secrets is a dead store that can be eliminated, the password checker can abort early when the Nth character of the hash is wrong, etc) because there is no way to actually express those invariants in the language. So I understand that some of these security-critical things are written in inline assembly to prevent these problems.
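                                                                        For the early-abort point specifically, a common constant-time comparison sketch (not taken from any particular library) accumulates differences instead of returning at the first mismatch:

                                                                        ```c
                                                                        #include <stddef.h>

                                                                        /* Compare n bytes without an early exit, so the running time does
                                                                           not depend on the position of the first mismatch. The caveat
                                                                           still applies: a sufficiently aggressive compiler could in
                                                                           principle transform this, which is why production implementations
                                                                           often drop to assembly or use volatile reads. */
                                                                        int ct_equal(const unsigned char *a, const unsigned char *b, size_t n) {
                                                                            unsigned char diff = 0;
                                                                            for (size_t i = 0; i < n; i++)
                                                                                diff |= (unsigned char)(a[i] ^ b[i]);
                                                                            return diff == 0;
                                                                        }
                                                                        ```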

                                                                        1. 1

                                                                          overwriting secrets is a dead store that can be eliminated

                                                                          I believe that explicit_bzero(3) largely solves this particular issue in C.
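                                                                          For illustration, the difference in a minimal sketch (assuming glibc ≥ 2.25 or a BSD libc):

                                                                          ```c
                                                                          #define _DEFAULT_SOURCE  /* for explicit_bzero on glibc */
                                                                          #include <string.h>

                                                                          /* A plain memset(buf, 0, len) on memory that is never read again
                                                                             is a dead store the optimizer may legally delete; explicit_bzero
                                                                             makes the same write but is guaranteed not to be optimized away. */
                                                                          void wipe_secret(char *buf, size_t len) {
                                                                              explicit_bzero(buf, len);
                                                                          }
                                                                          ```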

                                                                          1. 1

                                                                            Ah, yes, thanks!

                                                                            It looks like it was added to glibc in 2017. I’m not sure if I haven’t looked at this since then, if the resources I was reading were just not up to date, or if I just forgot about this function.

                                                                2. 8

                                                                  I do think high complexity is the source of many problems in sudo and that doas is a great alternative to avoid many of those issues.

                                                                  I also think sudo will continue being used by many people regardless. If somebody is willing to write an implementation in Rust which might be just as complex but ensures some level of safety, I don’t see why that wouldn’t be an appropriate solution to reducing the attack surface. I certainly don’t see why we should avoid discussing Rust just because an alternative to sudo exists.

                                                                  1. 2

                                                                    Talking about Rust as an alternative is missing the forest for the memes. Rust is a viral language (in the sense of internet virality), and a brain worm that makes us all want to talk about it. But in actual fact, C is not the main reason why anything is broken - complexity is. We could get much more robust and reliable software if we focused on complexity, but instead everyone wants to talk about fucking Rust. Rust has its own share of problems, chief among them its astronomical complexity. Rust is not a moral imperative, and not even the best way of solving these problems, but it does have a viral meme status which means that anyone who sees through its bullshit has to proactively fend off the mob.

                                                                    1. 32

                                                                      But in actual fact, C is not the main reason why anything is broken - complexity is.

                                                                      Offering opinions as facts. The irony of going on to talk about seeing through bullshit.

                                                                      1. 21

                                                                        I don’t understand why you hate Rust so much, but it seems as irrational as people’s love for it. Rust’s main value proposition is that it allows you to write more complex software that has fewer bugs, and your point is that this is irrelevant because the software should just be less complex. Well, I have news for you: software is not going to lose any of its complexity. That’s because we want software to do stuff; the less stuff it does, the less useful it becomes, or you have to replace one tool with two tools. The ecosystem hasn’t actually become less complex when you do that, you’re just dividing the code base into two chunks that don’t really do what you want. I don’t know why you hate Rust so much that it warrants posting anywhere the discussion might come up, but I would suggest if you truly cannot stand it that you use some of your non-complex software to filter out related keywords in your web browser.

                                                                        1. 4

                                                                          Agree with what you’ve written, but just to pick at a theme that’s bothering me on this thread…

                                                                          I don’t understand why you hate Rust so much but it seems as irrational as people’s love for it.

                                                                          This is obviously very subjective, and everything below is anecdotal, but I don’t agree with this equivalence.

                                                                          In my own experience, everyone I’ve met who “loves” or is at least excited about rust seems to feel so for pretty rational reasons: they find the tech interesting (borrow checking, safety, ML-inspired type system), or they enjoy the community (excellent documentation, lots of development, lots of online community). Or maybe it’s their first foray into open source, and they find that gratifying for a number of reasons. I’ve learned from some of these people, and appreciate the passion for what they’re doing. Not to say they don’t exist, but I haven’t really seen anyone “irrationally” enjoy rust - what would that mean? I’ve seen floating around a certain spiteful narrative of the rust developer as some sort of zealous online persona that engages in magical thinking around the things rust can do for them, but I haven’t really seen this type of less-than-critical advocacy any more for rust than I have seen for other technologies.

                                                                          On the other hand I’ve definitely seen solid critiques of rust in terms of certain algorithms being tricky to express within the constraints of the borrow checker, and I’ve also seen solid pushback against some of the guarantees that didn’t hold up in specific cases, and to me that all obviously falls well within the bounds of “rational”. But I do see a fair amount of emotionally charged language leveled against not just rust (i.e. “bullshit” above) but the rust community as well (“the mob”), and I don’t understand what that’s aiming to accomplish.

                                                                          1. 3

                                                                            I agree with you, and I apologize if it came across that I think rust lovers are irrational - I for one am a huge rust proselytizer. I intended for the irrationality I mentioned to be the perceived irrationality DD attributes to the rust community.

                                                                            1. 2

                                                                              Definitely no apology needed, and to be clear I think the rust bashing was coming from elsewhere, I just felt like calling it to light on a less charged comment.

                                                                            2. 1

                                                                              I think the criticism isn’t so much that people are irrational in their fondness of Rust, but rather that there are some people who are overly zealous in their proselytizing, as well as a certain disdain for everyone who is not yet using Rust.

                                                                              Here’s an example comment from the HN thread on this:

                                                                              Another question is who wants to maintain four decades old GNU C soup? It was written at a different time, with different best practices.

                                                                              In some point someone will rewrite all GNU/UNIX user land in modern Rust or similar and save the day. Until this happens these kind of incidents will happen yearly.

                                                                              There are a lot of things to say about this comment, and it’s entirely false IMO, but it’s not exactly a nice comment, and why Rust? Why not Go? Or Python? Or Zig? Or something else.

                                                                              Here’s another one:

                                                                              Rust is modernized C. You are looking for something that already exists. If C programmers would be looking for tools to help catch bugs like this and a better culture of testing and accountability they would be using Rust.

                                                                              The disdain is palpable in this one, and “Rust is modernized C” really misses the mark IMO; Rust has a vastly different approach. You can consider this a good or bad thing, but it’s really not the only approach towards memory-safe programming languages.

                                                                              Of course this is not representative for the entire community; there are plenty of Rust people that I like and have considerably more nuanced views – which are also expressed in that HN thread – but these comments certainly are frequent enough to give a somewhat unpleasant taste.

                                                                            3. 2

                                                                              While I don’t approve of the deliberately inflammatory form of the comments, and don’t agree with the general statement that all complexity is eliminateable, I personally agree that, in this particular case, simplicity > Rust.

                                                                              As a thought experiment, world 1 uses sudo-rs as a default implementation of sudo, while world 2 uses 500 lines of C which is doas. I do think that world 2 would be generally more secure. Sure, it’ll have more segfaults, but fewer logical bugs.

                                                                              I also think that the vast majority of world 2 populace wouldn’t notice the absence of advanced sudo features. To be clear, the small fraction that needs those features would have to install sudo, and they’ll use the less tested implementation, so they will be less secure. But that would be more than offset by improved security of all the rest.

                                                                              Adding a feature to a program always has a cost for those who don’t use that feature. If the feature is obscure, it might be overall more beneficial to have a simple version which is used by 90% of the people, and a complex version for the remaining 10%. The 10% would be significantly worse off in comparison to the unified program. The 90% would be slightly better off. But 90% >> 10%.

                                                                              1. 2

                                                                                Rust’s main value proposition is that it allows you to write more complex software that has fewer bugs

                                                                                I argue that it’s actually that it allows you to write fast software with fewer bugs. I’m not entirely convinced that Rust allows you to manage complexity better than, say, Common Lisp.

                                                                                That’s because we want software to do stuff, the less stuff it does the less useful it becomes

                                                                                Exactly. Software is written for people to use. (technically, only some software - other software (such as demoscenes) is written for the beauty of it, or the enjoyment of the programmer; but in this discussion we only care about the former)

                                                                                The ecosystem hasn’t actually become less complex when you do that

                                                                                Even worse - it becomes more complex. Now that you have two tools, you have two userbases, two websites, two source repositories, two APIs, two sets of file formats, two packages, and more. If the designs of the tools begin to differ substantially, you have significantly more ecosystem complexity.

                                                                                1. 2

                                                                                  You’re right about Rust’s value proposition; I should have added performance to that sentence. Or I should have just said “managed language”, because as another commenter pointed out, Rust is almost irrelevant to this whole conversation when it comes to preventing this type of CVE.

                                                                                2. 1

                                                                                  The other issue is that it is a huge violation of principle of least privilege. Those other features are fine, but do they really need to be running as root?

                                                                            4. 7

                                                                              Just to add to that: In addition to having already far too much complexity, it seems the sudo developers have a tendency to add even more features:

                                                                              Plugins, integrated log server, TLS support… none of those are things I’d want in a tool that should be simple and is installed as suid root.

(Though I don’t think complexity vs. memory safety are necessarily opposed solutions. You could easily imagine a sudo-alike tool that is written in Rust and does not come with unnecessary complexity.)

                                                                              1. 4

                                                                                What’s wrong with EBNF and how is it related to security? I guess you think EBNF is something the user shouldn’t need to concern themselves with?

                                                                                1. 6

                                                                                  There’s nothing wrong with EBNF, but there is something wrong with relying on it to explain an end-user-facing domain-specific configuration file format for a single application. It speaks to the greater underlying complexity, which is the point I’m making here. Also, if you ever have to warn your users not to despair when reading your docs, you should probably course correct instead.

                                                                                  1. 2

The point you made in your original comment is that sudo has too many features (disguised as a point about complexity). The manpage snippet you’re referring to has nothing to do with features; it’s a mix of (1) the manpage being written poorly and (2) a bad choice of configuration file format resulting in accidental complexity (with no additional features added).

                                                                                  2. 1

                                                                                    EBNF as a concept aside; the sudoers manpage is terrible.

                                                                                  3. 3

                                                                                    Hello, I am here to derail the Rust discussion before it gets started.

                                                                                    I am not sure what you are trying to say, let me guess with runaway complexity.

                                                                                    • UNIX is inherently insecure and it cannot be made secure by any means
                                                                                    • sudo is inherently insecure and it cannot be made secure by any means

                                                                                    Something else maybe?

                                                                                    1. 4

                                                                                      Technically I agree with both, though my arguments for the former are most decidedly off-topic.

                                                                                      1. 5

                                                                                        Taking Drew’s statement at face value: There’s about to be another protracted, pointless argument about rewriting things in rust, and he’d prefer to talk about something more practically useful?

                                                                                        1. 7

                                                                                          I don’t understand why you would care about preventing a protracted, pointless argument on the internet. Seems to me like trying to nail jello to a tree.

                                                                                      2. 3

                                                                                        This is a great opportunity to promote doas. I use it everywhere these days, and though I don’t consider myself any sort of Unix philosophy purist, it’s a good example of “do one thing well”. I’ll call out Ted Unangst for making great software. Another example is signify. Compared to other signing solutions, there is much less complexity, much less attack surface, and a far shallower learning curve.
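To illustrate the “do one thing well” point: the entire doas policy language fits in a handful of rules. A hypothetical config (usernames made up):

```
# /etc/doas.conf -- a complete policy in three lines
permit persist alice as root
permit nopass :wheel cmd /usr/sbin/service
deny bob
```

Compare that with the EBNF-documented sudoers grammar; there is simply much less to misconfigure.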

                                                                                        I’m also a fan of tinyssh. It has almost no knobs to twiddle, making it hard to misconfigure. This is what I want in security-critical software.

                                                                                        Relevant link: Features Are Faults.

                                                                                        All of the above is orthogonal to choice of implementation language. You might have gotten a better response in the thread by praising doas and leaving iron oxide out of the discussion. ‘Tis better to draw flies with honey than with vinegar. Instead, you stirred up the hornets’ nest by preemptively attacking Rust.

                                                                                        PS. I’m a fan of your work, especially Sourcehut. I’m not starting from a place of hostility.

                                                                                        1. 3

                                                                                          If you want programs to be more secure, stable, and reliable, the key metric to address is complexity. Rewriting it in Rust is not the main concern.

                                                                                          Why can’t we have the best of both worlds? Essentially a program copying the simplicity of doas, but written in Rust.

                                                                                          1. 2

                                                                                            Note that both sudo and doas originated in OpenBSD. :)

                                                                                            1. 9

                                                                                              Got a source for the former? I’m pretty sure sudo well pre-dates OpenBSD.

                                                                                              Sudo was first conceived and implemented by Bob Coggeshall and Cliff Spencer around 1980 at the Department of Computer Science at SUNY/Buffalo. It ran on a VAX-11/750 running 4.1BSD. An updated version, credited to Phil Betchel, Cliff Spencer, Gretchen Phillips, John LoVerso and Don Gworek, was posted to the net.sources Usenet newsgroup in December of 1985.

                                                                                              The current maintainer is also an OpenBSD contributor, but he started maintaining sudo in the early 90s, before OpenBSD forked from NetBSD. I don’t know when he started contributing to OpenBSD.

                                                                                              So I don’t think it’s fair to say that sudo originated in OpenBSD :)

                                                                                              1. 1

                                                                                                Ah, looks like I was incorrect. I misinterpreted OpenBSD’s innovations page. Thanks for the clarification!

                                                                                          1. 17

                                                                                            Distributed builds is why I made lazyssh. It’s a little basic right now, but I’m not sure how much I’ll improve on what’s there. It does just about everything I want.

                                                                                            I use it at $work to start beefy EC2 instances for building our NixOS deployments, and also for ARM builds that can run on the newer generation EC2 instance types. It sits in between Nix and the actual machine as an SSH jump host.

                                                                                            I used to also use it with VirtualBox locally, but am trying to cut down on non-sandboxed apps on a clean macOS install.

                                                                                            1. 2

                                                                                              lazyssh looks sick as! I’d been thinking of writing something similar, and it’s good to know that I don’t have to. Hopefully you get the robustness dialed in.

                                                                                            1. 1

                                                                                              I built a small internal tool at $work to automate our NixOS deployments that integrates with Terraform. Still want to push to open source it, but simply haven’t had time to work on it lately.

                                                                                              The setup we have is that secrets are defined in Nix lang but not part of any derivation. (Like nixops, I believe.) We then make them available in /run/secrets during activation only. Regular activation scripts are used to install them wherever needed.

                                                                                              (Minor obstacle is that we always need to check existence, because activation runs are not always initiated by our tool.)
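A rough sketch of that pattern, with hypothetical paths and names (not our actual module): an activation script installs a secret from /run/secrets, guarding for activation runs where the secrets weren’t populated.

```nix
# Install an app token during activation, but only if our tool
# populated /run/secrets for this run.
system.activationScripts.appSecret.text = ''
  if [ -e /run/secrets/app-token ]; then
    install -D -m 0400 -o app /run/secrets/app-token /var/lib/app/token
  fi
'';
```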

                                                                                              1. 2

                                                                                                We (not royal we, but my team) also built something that helps out deployments and integrates with Terraform, but didn’t have time to open-source it. It’s mostly based on, but we had to rework the whole process of sending secrets to hosts (Terraform is not great when copying, as it leaves files around). For us, the secrets are defined in Terraform, and stored in the terraform state, but during the deployment they are stored on the remote machine, outside of nix store. The downside is that we have to redeploy on every secret change (which is simple), and ensure the services reload (which is not so simple).

                                                                                                1. 1

                                                                                                  Our setup is different from the Tweag approach in that we have a server/agent setup. A Terraform provider uploads a flake to the server which builds it, instead of building locally on the machine running Terraform. We pass secrets and other variables from Terraform to Nix by injecting a vars.json into the flake as we upload. Once the build completes, an agent (running on the target machine) downloads and activates the configuration.

                                                                                                  Service reload is still an issue yes, because the Nix activation doesn’t notice any change if just secrets were updated. I still have to tackle it, but was thinking of adding a hash of (a subset of) secrets in, for example, the systemd unit as a comment. Just so that the file (and derivation) changes, and Nix understands it needs to give it a restart.
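Something along these lines, perhaps (hypothetical service name; using an inert Environment entry rather than a literal comment, since NixOS module options don’t easily emit comments into units):

```nix
# The hash changes whenever the secrets change, so the generated unit
# file (and its store path) changes too, and the activation switch
# knows to restart the service.
systemd.services.my-service.serviceConfig.Environment =
  "SECRETS_HASH=${builtins.hashString "sha256" (builtins.readFile ./vars.json)}";
```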

                                                                                                  1. 2

                                                                                                    We keep a secret per file/service and then this helps a bit:

systemd.paths = {
  # we rely on this to detect changes to keys and
  # automatically trigger the restart of the service
  hydra-server-watcher = {
    wantedBy = [ "" ];
    pathConfig = {
      PathChanged = [ "/var/keys/admin_password" ];
    };
  };
};

systemd.services = {
  hydra-server-watcher = {
    description = "Restart hydra-server on credentials change";
    wantedBy = [ "" ];
    after = [ "" ];
    serviceConfig = {
      Type = "oneshot";
      ExecStart = "${pkgs.systemd}/bin/systemctl restart hydra-server.service";
    };
  };
};

                                                                                                    I forgot why we couldn’t rely just on systemd.paths. This works reasonably well, however we sometimes run into nginx not reloading on the latest Let’s Encrypt certificates (they are also pushed from the deployer, via Terraform, as our machines don’t have access to the internet, so we can’t use http challenge).

                                                                                              1. 16

I’m a big fan/user of Rust, and I think this is a fairly accurate assessment. There is a lot of interest in building web applications with Rust, and I’ve built a couple myself, but there’s still a ways to go. Coming from Rails, some of the things I’ve missed are:

                                                                                                • built in csrf protection on forms
                                                                                                • an equivalent to devise
                                                                                                • sophisticated form handling (array attributes, nested attributes)
                                                                                                • form validation and presentation of errors back to the user, preserving form input

                                                                                                Actually… maybe it’s mostly form handling that’s immature. Seems many people build back ends in Rust for JS SPAs where this isn’t such an issue but I’m not interested in building that type of web application.

                                                                                                On the other hand some things are delightfully easy:

                                                                                                • JSON responses/request parsing — often a single line thanks to serde
                                                                                                • Microsecond response times, low memory use, instant start up time.
                                                                                                1. 5

                                                                                                  Serde is so amazing. It’s actually something I miss in higher-level language frameworks, but that’s because it ties in with the type system in Rust to also do validation.

In general, the type system in Rust takes away a great deal of ‘what-ifs’ about corner cases in code. I feel like we often cut corners when working in higher-level languages, causing confusing error messages for the user or straight-up security issues.

                                                                                                  I guess it’s a double edged sword that also causes some of the difficulty highlighted by OP.

                                                                                                  1. 3

                                                                                                    Well, and async is complicated if you use it.

                                                                                                    1. 5

                                                                                                      Yeah; I’d like to underscore this sentiment. This has been a massive pain point for me lately.

I really appreciate the Rust approach to async (library-based). However, I still find myself frustrated by dependency lock-in and a lack of up-to-date documentation. If I want to leverage the larger async ecosystem, in almost every case I need to commit to tokio and ensure that its version is consistent across any other dependency that leverages tokio under the hood. Eventually, I realized I was spending more time juggling dependencies and troubleshooting async runtime shenanigans than writing business logic. Much of the async ecosystem still has incomplete documentation and requires spelunking through source code. This is good and bad: it’s a sign that the ecosystem is growing rapidly, but it also quickly makes Google/Stack Overflow queries out of date. I’ll admit it’s possible that some of the pain I’ve experienced already has a solution in the form of a compat-style lib.

I generally try to avoid being too critical in comments, as it’s easy to just leave low-energy feedback/complaints. I know a lot of hard work and thought has been put into Rust’s async effort. However, I’m concerned that if my pain tolerance is being hit, what kind of impression is left on users with less experience?

                                                                                                      In spite of all of this, I’m still optimistic that Rust is headed in the right direction.

                                                                                                    2. 1

                                                                                                      an equivalent to devise

                                                                                                      The original author of Devise went on to create Elixir, and he has decided to take a different approach to authentication this time around. So a strict equivalent to Devise might not be the best thing.

                                                                                                      1. 5

                                                                                                        By equivalent I meant more generally: a solution to common authentication patterns in web applications so you don’t have to roll your own.

                                                                                                    1. 5

                                                                                                      Aren’t SEO and Lighthouse scores things that change all the time? So, that requires maintenance.

                                                                                                      I wonder if we’ll ever go through another transition like ‘mobile-first’ that’d also break your CSS on new devices.

                                                                                                      OP also compares and GitHub Pages, two hosted services, while implying his page on GitHub will never have security vulnerabilities. Sure, if your host is outside of your security scope, then your ‘ stack’ will also never have security vulnerabilities.

                                                                                                      Ok, that last one was probably super nitpicky. And I do agree and appreciate simple fast pages. :-)

                                                                                                      1. 3

                                                                                                        To be fair, “a transition that breaks your CSS” is a lot easier to handle when you only have one CSS file :-) I think this is probably a point in favor of the OP’s approach and against something like WordPress, where I imagine that many popular themes would be updated quickly but that a long tail of other themes would be updated much more slowly.

                                                                                                        1. 0

                                                                                                          SEO and Lighthouse scores are to be ignored as a matter of principle.

                                                                                                        1. 3

It seems nice, but that looks like a lot of work to generate 10 lines of systemd config, or even just one line in a cron job?

                                                                                                          1. 3

                                                                                                            It’s about declaring what your system config / app deployment should look like in a single place, rather than changing details across a running system. You can combine that systemd unit and cronjob in a single NixOS module that you keep in a git repo, and that’s just a very small example. More complicated applications can also include virtual host config, build steps, etc. in a single file, if the developer feels they logically belong together.

                                                                                                            Some of this is comparable to what tools like Ansible achieve. What if you have to reinstall the machine? Or what if you want to share some configuration across machines? You don’t want to figure out how to setup your server / app all over again, especially if the situation is unexpected and you’re pressed for time.

                                                                                                            NixOS goes a bit further than Ansible et al by not describing steps to apply to an Ubuntu system (for example), but by being its own Linux distribution built entirely with Nix. (Benefits of that are a separate discussion, I think.)
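As a tiny illustration of “a single NixOS module” (names hypothetical): a service and its timer declared together, which survives reinstalls and can be imported on any machine.

```nix
{ pkgs, ... }:
{
  systemd.services.site-backup = {
    description = "Nightly site backup";
    serviceConfig = {
      Type = "oneshot";
      ExecStart = "${pkgs.rsync}/bin/rsync -a /var/www/ /backup/www/";
    };
  };
  # The matching timer lives right next to the service it triggers.
  systemd.timers.site-backup = {
    wantedBy = [ "timers.target" ];
    timerConfig.OnCalendar = "daily";
  };
}
```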

                                                                                                            1. 2

                                                                                                              That was kind of my point. We now need a custom Linux distribution with its own (new) programming language to achieve very simple things. Well, at least it’s not YAML again.

                                                                                                              1. 4

                                                                                                                The language and distribution are about 17 years old, so I guess new is relative here.

                                                                                                                1. 2

                                                                                                                  A matter of perspective, maybe? I’ve tried often to get comfortable with Debian packaging for my own (also work) applications and smaller tools, but couldn’t. Instead, it’s always just ‘treat the OS like a black box’ and deploy to /opt or similar. NixOS is a lot more approachable to me, despite the learning curve. (I guess a different learning curve.)

                                                                                                              2. 3

                                                                                                                It’s not just 10 lines of systemd config. It’s also putting that config in the right place and implicitly activating it so you can’t mess it up. One line of cronjob doesn’t give you logs for that cronjob when it fails. The basic process here can also be adapted to other things like backup scripts.

                                                                                                                This blogpost was cherry-picked from this config:, which uses Nix functions to dynamically create discord webhook timers so I can add an arbitrary number of them in the future.

                                                                                                                For a more complicated example, see here:, this handles a service called mi and exposes it at I could “just write the systemd units by hand”, but that doesn’t handle pushing the units and scripts to the machine and making sure they are enabled. This allows me to rest assured that I can trivially move the config to other machines if I need to, such as if I get a new home server. Not to mention automatically building and installing all the services on the machine and then making sure the systemd units point to the correct binaries.

                                                                                                                Sure, I can write 10 lines of systemd config today and I will be fine. However tomorrow it may end up not working out when circumstances and facts change.

                                                                                                                1. 1

                                                                                                                  One line of cronjob doesn’t give you logs for that cronjob when it fails.

                                                                                                                  Well, I used to get emails from failed cron jobs, and systemd logs are still a thing if you’re on Linux.

                                                                                                                  I completely understand the appeal of having the state/config integrated in one repository, but it looks like we only reach as far as tools like Ansible/Chef/Salt do for now. I’ve only seen a few bits of nix config here and there so far, and there’s probably a bigger picture that makes it all very exciting, but I guess I’d need to dig myself into that hole to find out if I like digging.

                                                                                                                  1. 4

                                                                                                                    The key difference is Ansible, Chef, and Salt are all still saying what you want to do, not what you want to end up with. What I mean by this is with each of these tools:

                                                                                                                    1. start with a new system
                                                                                                                    2. define a service you want running
                                                                                                                    3. deploy
                                                                                                                    4. delete that service from your configuration repository
                                                                                                                    5. deploy

                                                                                                                    Chef and friends will just stop managing that service, but the service will still exist. NixOS, though, won’t have that service anymore.
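In NixOS terms, the whole lifecycle above is the presence or absence of a single declaration (sketch):

```nix
# Step 2 is adding this line; step 4 is deleting it. The next rebuild
# removes sshd entirely rather than merely un-managing it.
services.openssh.enable = true;
```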

                                                                                                                    1. 1

                                                                                                                      The key difference is Ansible, Chef, and Salt are all still saying what you want to do, not what you want to end up with.

                                                                                                                      That should be tag line for nix in general. For whatever reason, that really resonates with me.

                                                                                                                      Chef and friends will just stop managing that service, but the service will still exist. NixOS, though, won’t have that service anymore.

                                                                                                                      I don’t want to be pedantic, but if not having the service anymore is your goal, that’s still totally doable with chef and friends.

                                                                                                                      1. 1

                                                                                                                        Yes it is possible, but you have to make it your goal. Again, you’re so used to having to think about what you want to do, not just what you want. Nix lets you skip past that and just write down what you want.

                                                                                                              1. 6

Maybe this is new, but it turns out NixOS has a startAt attribute on services to automate creating that timer. I only just found out and have some code to clean up myself. :-)
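A hedged sketch of what that looks like (hypothetical webhook service, assuming pkgs is in scope):

```nix
systemd.services.discord-webhook = {
  # NixOS generates the matching systemd timer from this one attribute.
  startAt = "*:0/15";  # systemd OnCalendar syntax: every 15 minutes
  environment.WEBHOOK_URL = "https://example.invalid/webhook";
  script = ''
    ${pkgs.curl}/bin/curl -s -X POST -d '{"content":"ping"}' "$WEBHOOK_URL"
  '';
};
```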

                                                                                                                There’s also environment for setting vars, instead of in the shell script, which could be helpful if you’re worried about escaping values.

                                                                                                                I’m not sure if the readFile has any benefit of protecting the secret here. The contents will still be part of the unit definition generated by Nix and present in the (world-readable) store. I think it really needs to be read run-time somehow?

                                                                                                                1. 3

                                                                                                                  I’m not sure the readFile has any benefit for protecting the secret here. The contents will still be part of the unit definition generated by Nix and present in the (world-readable) store. I think it really needs to be read at run time somehow?

                                                                                                                  I should have explained how to do it with nixops and its keys. However, a webhook leak isn’t really that bad, because it is POST-only and easily replaced. I’m working on a nixops tutorial at the moment, though.

                                                                                                                  1. 2

                                                                                                                    Today I learned! Thanks. That will make things much easier.

                                                                                                                  1. 11

                                                                                                                    I wonder what they’re going to do for Mac Pro class of hardware.

                                                                                                                    Trying to fit more RAM, more CPU cores, and a beefier GPU all into one package doesn’t seem realistic, especially that one-size-fits-all chip isn’t going to make sense for all kinds of pro uses.

                                                                                                                    1. 7

                                                                                                                      It’s going to be interesting to see what they do (if anything at all) with a 250W TDP or so. Like, 64? 128? 196 cores? I’m also interested in seeing how they scale their GPU cores up.

                                                                                                                      1. 2

                                                                                                                        There’s NUMA, though I don’t think Darwin supports that right now.

                                                                                                                        1. 2

                                                                                                                          Darwin does support the trashcan Mac Pros, right? They have two CPUs, and that’s a bona fide NUMA system.

                                                                                                                          1. 3

                                                                                                                            Trashcan Mac Pros (MacPro6,1) are single CPU - it’s the earlier “cheesegrater” (MacPro1,1-5,1) that are dual CPU. I do believe they are NUMA - similar-era x86 servers certainly are.

                                                                                                                            1. 1

                                                                                                                              Ah, you’re right, sorry for the confusion.

                                                                                                                        2. 2

                                                                                                                          Trying to fit more RAM, more CPU cores, and a beefier GPU all into one package doesn’t seem realistic

                                                                                                                          I have heard this repeatedly from various people – but I don’t have any idea why it would be the case. Is there an inherent limit on SoC package size?

                                                                                                                          1. 1

                                                                                                                            I’d assume they’ll just support non-integrated RAM, as there will be space and cooling available.

                                                                                                                          1. 2

                                                                                                                            This is a cool idea… I like that if I were to use it in a shell script, all the parameters would be “in line” in the script.

                                                                                                                            And you wouldn’t have to concurrently start a VM and then ssh into it, which gives you a race to resolve.

                                                                                                                            I don’t use AWS, and rarely VirtualBox, but I will keep it in mind …

                                                                                                                            1. 1

                                                                                                                              What do you use instead?

                                                                                                                              Someone on HN said they were going to take a shot at GCP support this weekend.

                                                                                                                            1. 3

                                                                                                                              Author here. This is very much a ‘release early’ type thing. I’ve mostly been testing with it, and not yet seriously using it.

                                                                                                                              Besides the cases mentioned in the readme, I also want to use this to automate on-demand Nix builders, because Nix only understands SSH for remote builders. Locally I run Mac, and I sometimes need to do a Nix build for Linux. Similarly at work, we have a build server that I want to do ARM builds on, so I can eventually deploy on t4g.* EC2 instances.
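                                                                                                              For anyone unfamiliar: Nix declares remote builders as SSH destinations, for example in /etc/nix/machines. A sketch with placeholder hostnames (not my actual setup):

                                                                                                              ```
                                                                                                              ssh://linux-builder.example.com x86_64-linux /root/.ssh/id_ed25519 4
                                                                                                              ssh://arm-builder.example.com aarch64-linux /root/.ssh/id_ed25519 2
                                                                                                              ```

                                                                                                              The fields are the SSH URI, the system type, the SSH identity file, and the maximum parallel jobs; further trailing fields (speed factor, supported features) are optional. Since this is plain SSH, an on-demand instance that only speaks SSH slots straight in.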

                                                                                                              Anyway, hope this is useful to others. 🙂

                                                                                                                              1. 2

                                                                                                                                Any plans to add generic terraform configuration? Maybe with something like

                                                                                                                                1. 1

                                                                                                                                  Oh! I did have that as a big ‘maybe’ on the todo list, because I wasn’t sure it was possible. But I didn’t know about CDK. Will have to take a look, thanks for the pointer!

                                                                                                                                  1. 1

                                                                                                                                    Nice! Opening it up to all the providers that terraform offers would be amazing.

                                                                                                                              1. 11

                                                                                                                Hetzner is awesome. I first used DigitalOcean too, but after getting more familiar with VPSes, I found Hetzner and have used them ever since. I think I started using Hetzner Cloud soon after it left beta, which was already some years ago. I’ve been very happy with them since then! :smile:

                                                                                                                                1. 3

                                                                                                                  I wish Hetzner had other locations, including the US and Asia. DigitalOcean offers more locations. I guess the direct competitor here is Scaleway; both have a limited selection of locations, and Hetzner seems to have lower prices now.

                                                                                                                                  1. 18

                                                                                                                                    I have two cloud servers at Hetzner. To be very very honest, a cloud provider with no presence in the US at all is really attractive for me. It’s not that I would outright cancel if they did create presence in the US, but I would become more wary of public response and opinion on the company.

                                                                                                                                    It’s a small matter of principle, I guess. I care about Europe, and feel like it’s threatened sometimes.

                                                                                                                                    At the same time, if you want to run a company with worldwide presence, I totally understand Hetzner is less attractive.

                                                                                                                                    1. 3

                                                                                                                                      Yes, absolutely. Hetzner even has a data center in Finland, which is kinda rare.

                                                                                                                                      1. 1

                                                                                                                        Another problem with a potential US DC is that the US government can ask them to hand over your data. That may not be an issue by itself; in theory, the government works for the good of its citizens. But some governments, the US included, have a particularly bad record of abusing this data to target groups of people with semi-legal or outright illegal operations. Additionally, once that data sits in a US datacenter, I can no longer guarantee my users that their data is fully private, as is EU citizens’ right.

                                                                                                                                      2. 4

                                                                                                                                        vultr is probably a more direct competitor to DO in the vps space.

                                                                                                                                        1. 2

                                                                                                                                          Yep, with even lower price.

                                                                                                                                          1. 1

                                                                                                                            Depends what you mean by lower price: Vultr definitely has lower-priced options, but seems to be more expensive than Hetzner for the same specs.

                                                                                                                                        2. 2

                                                                                                                          Seeing as it was mentioned here: Scaleway seems to be the only cloud provider I’ve been able to find with ARM hosts available at a reasonable price. Yes, AWS has ARM, but at literally 2.5x the price.

                                                                                                                                          I’ve been pretty happy with them overall, despite being on the other side of the ocean.

                                                                                                                                          1. 1

                                                                                                                                            I have a single VM currently in DO because I use it to run a couple of services for my family (in the Caribbean) and also for when I am traveling home myself. Hetzner is great but the latency of going to Europe and back for something like a VPN adds up very quickly.

                                                                                                                                            1. 1

                                                                                                                                              Contabo (also in Germany, in fact right here in Munich) has even better pricing, depending on what server you want.

                                                                                                                                              1. 2

                                                                                                                                Thanks, they seem to offer bigger instances at pricing similar to Hetzner’s. But their website looks really outdated, and makes it feel like a scam site :(

                                                                                                                                                1. 1

                                                                                                                                  I agree, it looks like crap :) I even had to wait for them to “activate” my account or something, but I did get the VPS access data a few hours later. It’s legit, and it works so far at least.

                                                                                                                                  As for the scam: they’re in Germany, so it would be pretty hard for them to actually cheat, I think. So I don’t really know what their game is. Maybe they oversell or something, which I wouldn’t notice because I’m not using much of the resources.

                                                                                                                                                2. 1

                                                                                                                                  Too bad they don’t offer any API that I can see. I love using Terraform for IaC.

                                                                                                                                                  1. 2

                                                                                                                                    It seems like they haven’t even automated setup; there are probably manual steps involved. But an API is an often-requested feature. Let’s see how quickly they can deliver it.