1.  

    For completeness’ sake I also recommend taking a look at kubenix.

    Kubenix is written in Nix and uses the module system to add a layer of schema validation. It generates the schema from the Kubernetes swagger spec, so you know it will always be complete and up to date.

    One advantage of using Nix is that you can use the same language to generate both the container images and the Kubernetes configuration, which allows you to do things like run end-to-end tests.

    1.  

      Do you know of any guides on how to get started with Nix to define container images?

      1.  

        How familiar are you with nix already?

        There are some examples in the manual but that might not be enough: https://nixos.org/nixpkgs/manual/#sec-pkgs-dockerTools If you learn best by example, there are a few examples there: https://github.com/nix-community/docker-nixpkgs

        And to learn about Nix itself, I heard that this is a pretty good source: https://github.com/tazjin/nix-1p

    1. 12

      This should be taken with a huge grain of salt. I dislike the conspiratorial tone of his linked “the real reason for systemd” article, and I think systemd as a whole has marked people really deeply and negatively, which is weird because you can always start your own distro without it; there are more than enough haters out there to make it happen.

      1. 10

        TA complains that Linux has fragmentation and then complains when the systemd project tries to unify userspace. :+1:

        1. 10

          The number of Linux users that intensely hate systemd is probably pretty small, given that people are not moving to other distributions en masse (even if you can get Debian without systemd, which should attract Debian, Ubuntu, Mint, etc. folks). The drama of a vocal minority is used by some (but definitely not all) BSD folks to bash Linux. FreeBSD’s Benno Rice had a very nice, balanced talk about systemd:

          https://www.youtube.com/watch?v=o_AIw9bGogo

          1. 1

            I think there are plenty of people who don’t like systemd, but other concerns/aspects in their distro choice take priority over “I don’t like systemd”. For example, I have some friends who use Arch because they like the docs and pacman, but don’t like systemd.

            In many ways, systemd is an implementation detail many people don’t care about, although personally, I think that if attempting to fix Spotify can crash your system (I had to restart dbus, which crashes logind and requires a reboot) or fixing your nfsd can cause hearing damage, something is very wrong…

            1. 1

              Same here. I don’t hate it. I would love to not have it, but overall the problems are minor (for me personally it’s still a net negative compared to sysvinit), so I’ll stick to Debian (and sometimes Ubuntu) because it’s not a factor that drives me off my distro of choice.

        1. 19

          Oh my god, this article is so bad. How can you win people over by vomiting all over the other camp? Maybe there are some technical arguments in there but I couldn’t bring myself to read this until the end.

          1. 3

            My main goal for this week is to finish this PR: https://github.com/direnv/direnv/pull/555

            This is a long-standing issue with direnv that prevents the user from effectively aborting a long-running .envrc. After 5 seconds, direnv says: “… is taking a while to execute. Use CTRL-C to give up.”. If the user hits Ctrl-C, direnv aborts, the next prompt shows up, and direnv gets executed again, restarting the whole build process.

            I am this close to finishing the PR, but it needs one last push as the mechanism is quite complex.

            1. 1

              10 years of support is fantastic. RedHat is 10 years, Ubuntu is 5 years and macOS is roughly 3 years with no commitment. macOS system upgrades are free but they tend to support devices only for 5 years.

              1. 1

                macOS is roughly 3 years with no commitment

                It’s worse than that, because in the case of macOS the OS is tied to the hardware. So you’ll be upgrading your perfectly good devices, too, at some point. For no other reason than Apple wants you to.

              1. 6

                According to zmap it takes 45min to scan all of IPv4 on a Gigabit connection. That could be a slow but interesting way to reliably bootstrap the network in case of an attack.

                1. 1

                  I like the idea.

                  The 45-minute scan advertised on the homepage is probably the result of a TCP SYN scan though. You’ll probably need to add an application-layer scanner on top of that (zgrab?). Not sure how this will affect the overall latency of the scan :/
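
                  To make that concrete, an application-layer probe is basically a connect plus a short read. A rough Go sketch of a single probe (the address and the timeouts are made-up examples; a tool like zgrab does this in bulk across the whole scan output):

                  package main

                  import (
                    "fmt"
                    "net"
                    "time"
                  )

                  // probe does a single application-layer grab: connect, wait briefly,
                  // and record whatever banner the service sends first.
                  func probe(addr string) {
                    conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
                    if err != nil {
                      fmt.Println("no answer:", err)
                      return
                    }
                    defer conn.Close()
                    conn.SetReadDeadline(time.Now().Add(2 * time.Second))
                    buf := make([]byte, 256)
                    n, _ := conn.Read(buf)
                    fmt.Printf("%s banner: %q\n", addr, buf[:n])
                  }

                  func main() {
                    probe("192.0.2.1:22") // TEST-NET address, purely illustrative
                  }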

                1. 3

                  I like to think "$@" as keyword because it only works with that exact incantation. For example "-v $@" won’t work as it will be expanded to a single argument instead of a list of arguments.

                  $* is not really useful in itself, but it can be used to signal the intent that the arguments are being treated as a single string rather than as a list.

                  For example:

                  run() {
                    # "$*" joins the arguments into a single string, for display
                    echo "running $*"
                    # "$@" re-expands them as separate words for the actual call
                    "$@"
                  }
                  
                  run ls -la
                  

                  NOTE: this also works with bash arrays

                  args=(a "b b" c)
                  run "${args[@]}"
                  
                  1. 1

                    A counter-point to the Bernstein-chaining style is "${*@Q}"

                    Sometimes programs want to get the command as a single string, and this incantation allows you to escape the array properly back into bash.

                    Eg:

                    > echo "${args[*]@Q}"
                    'a' 'b b' 'c'
                    
                    1. 1

                      echo "${args[*]@Q}"

                      It seems to work the same way with @ for me. As a rule I never use anything like $* – it’s just easier to remember that way.

                      bash-4.4$ args=(a "b b" c)
                      bash-4.4$ echo "${args[*]@Q}"
                      'a' 'b b' 'c'
                      bash-4.4$ echo "${args[@]@Q}"                                                                                         
                      'a' 'b b' 'c'
                      

                      (note: it doesn’t work at all in bash 4.3)

                      1. 1

                        yeah makes sense. those poor macOS users with their bash 3.2 are missing out on the good things.

                  1. 3

                    Some quick numbers for people not familiar with the world of programming languages. Around 10,000 computer languages have been released in history (most of them in the past 70 years). About 50-100 of those have more than a million users worldwide and the names of some of them may be familiar to even non-programmers such as Java, Javascript, Python, HTML or Excel.

                    Where is the dataset for this claim! The irony :p

                    1. 2

                      It depends on how you learn best. Go is already fairly high-level.

                      Here are a few things I would play with:

                      • Install Wireshark. This is a tool that allows you to record and analyze your network traffic. There are a TON of options and acronyms so don’t be afraid. Just keep it around and play with it while doing the other stuff. Try the “Follow TCP stream” option. This is more to get a sense of what is happening and learn by osmosis.
                      • Find some references on Berkeley sockets. This is the API that is used for networking on Linux, macOS and all the other UNIX operating systems. It’s important to understand what the bind(), listen(), connect() and accept() operations do.
                      • Implement a small server and client in Go with the higher-level version: https://golang.org/pkg/net/ .
                      • Then read their implementation. Maybe try to use the underlying syscalls directly: https://golang.org/pkg/syscall/

                      That should be enough for day-to-day programming. I also encourage you to read up on how DHCP, ICMP, TCP/IP and DNS resolution work, and on higher-level protocols such as HTTP.
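
                      To make the third bullet concrete, here is a minimal sketch of a TCP echo server and client on top of the net package (the port, the line-based framing and the single-file layout are arbitrary choices for illustration, not anything the package requires):

                      package main

                      import (
                        "bufio"
                        "fmt"
                        "net"
                        "os"
                      )

                      // server wraps the bind()/listen()/accept() sequence via net.Listen/Accept.
                      func server() error {
                        ln, err := net.Listen("tcp", "127.0.0.1:9000")
                        if err != nil {
                          return err
                        }
                        for {
                          conn, err := ln.Accept()
                          if err != nil {
                            return err
                          }
                          go func(c net.Conn) {
                            defer c.Close()
                            // echo each line back to the client
                            s := bufio.NewScanner(c)
                            for s.Scan() {
                              fmt.Fprintln(c, s.Text())
                            }
                          }(conn)
                        }
                      }

                      // client wraps connect() via net.Dial.
                      func client(msg string) error {
                        conn, err := net.Dial("tcp", "127.0.0.1:9000")
                        if err != nil {
                          return err
                        }
                        defer conn.Close()
                        fmt.Fprintln(conn, msg)
                        reply, err := bufio.NewReader(conn).ReadString('\n')
                        fmt.Print("echoed: ", reply)
                        return err
                      }

                      func main() {
                        if len(os.Args) > 1 && os.Args[1] == "server" {
                          if err := server(); err != nil {
                            fmt.Fprintln(os.Stderr, err)
                          }
                          return
                        }
                        if err := client("hello"); err != nil {
                          fmt.Fprintln(os.Stderr, err)
                        }
                      }

                      Run it once with the server argument and once without, and watch the exchange with Wireshark’s “Follow TCP stream” while it happens.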

                      1. 1

                        Thank you! Do you have any ideas for little projects I could do after I am done learning the basics and have played around with the implementations?

                        1. 2

                          You could write a little file-sending tool.

                          You need a client mode that opens the file and tries to forward it to a server. For example the usage could be filecli send <filename> <targethost[:port]>, e.g. filecli send /etc/passwd myserver:8848.

                          You need a server that accepts new client connections and puts each file in a pre-determined location. For example the usage could be filecli recv <bind-addr[:port]> <directory>, e.g. filecli recv 0.0.0.0:8848 ./Downloads.

                          At first you just want to take the raw data from each connection and store it under incremental names, e.g. ~/Downloads/file-1, ~/Downloads/file-2. Each connection can only send one file, and the file is closed at the same time as the connection.

                          Once you have that, think about what issues the implementation has and how you could augment it, maybe by creating some sort of exchange protocol yourself. What happens if the connection gets interrupted during the transfer?
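
                          If it helps to see the shape of it, here is a rough Go sketch of the receiving side (subcommand parsing is left out, and the filecli usage above is of course just hypothetical):

                          package main

                          import (
                            "fmt"
                            "io"
                            "net"
                            "os"
                            "path/filepath"
                          )

                          func main() {
                            if len(os.Args) != 3 {
                              fmt.Fprintln(os.Stderr, "usage: recv <bind-addr[:port]> <directory>")
                              os.Exit(1)
                            }
                            ln, err := net.Listen("tcp", os.Args[1])
                            if err != nil {
                              panic(err)
                            }
                            for i := 1; ; i++ {
                              conn, err := ln.Accept()
                              if err != nil {
                                panic(err)
                              }
                              // one file per connection, closed when the connection ends
                              name := filepath.Join(os.Args[2], fmt.Sprintf("file-%d", i))
                              f, err := os.Create(name)
                              if err != nil {
                                panic(err)
                              }
                              if _, err := io.Copy(f, conn); err != nil {
                                fmt.Fprintln(os.Stderr, "transfer failed:", err)
                              }
                              f.Close()
                              conn.Close()
                            }
                          }

                          The sending side is even shorter: net.Dial, then io.Copy from the opened file into the connection.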

                      1. 2

                        And since this is for personal projects, for issue tracking, just use a flat file in the repo (TODO.md or something like that). Keep the file small: if it’s bigger than one page, the project is probably too big. “Closing” an issue is just removing the entry with the commit that fixes it, no need to add a #someid reference in the commit.

                        This actually works surprisingly well for small (1-3) teams as well.

                        1. 3

                          The todo.txt format would fit well here.

                          1. 1

                            Oh wow, that sounds like a good idea. I’m working on a personal git server project as well and I like your idea.

                          1. 12

                            But, what do you gain from running this on k8s? It doesn’t seem to be less administration, and it’s maybe a bit more complex to set up and keep up to date. A VPS provider that offers redundancy (when a hardware node fails, boot the VPS on another node via shared storage) would offer the same in this case, since there is no auto-scaling or clustering. Or am I missing something?

                            1. 4

                              The main thing I get is that it’s slightly easier to test IRC bots on my kubernetes cluster. I just have them connect to ircd.svc.ircd.cluster.local:6667. Otherwise, there’s not really any point in this other than to prove to myself that k8s is generic enough to host a git server, my web apps, discord/IRC bots and an IRC server.

                              I can also update the config by updating 02_secrets.yml and rehashing the ircd. The big thing for me is that I don’t have to write the code that updates the configuration on the disk, it’s just there for free.

                              In theory I could also make this support autoscaling, but I haven’t dug deep enough into ngircd to find out if that’s possible or not.

                              Altogether, yes this is kind of dumb, but it works. It was also a lot easier to set up than I thought it would be. The stuff I learned setting this up will become invaluable when I set up a gopher interface for my website (mostly for stripping the PROXY protocol header).
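
                              On the PROXY protocol part: stripping the header mostly means consuming the first line of the connection before handing the rest to the real handler. A rough Go sketch, assuming PROXY protocol v1 and an arbitrary port (none of this is taken from the author’s actual setup):

                              package main

                              import (
                                "bufio"
                                "fmt"
                                "net"
                                "strings"
                              )

                              func handle(conn net.Conn) {
                                defer conn.Close()
                                r := bufio.NewReader(conn)
                                line, err := r.ReadString('\n')
                                if err != nil {
                                  return
                                }
                                // a v1 header looks like "PROXY TCP4 <src> <dst> <sport> <dport>\r\n"
                                if strings.HasPrefix(line, "PROXY ") {
                                  fmt.Println("stripped:", strings.TrimSpace(line))
                                  // the real client data starts after the header
                                  if line, err = r.ReadString('\n'); err != nil {
                                    return
                                  }
                                }
                                // keep using r (not conn) from here on, it may have buffered payload
                                fmt.Printf("first client line: %q\n", line)
                              }

                              func main() {
                                ln, err := net.Listen("tcp", "127.0.0.1:7070")
                                if err != nil {
                                  panic(err)
                                }
                                for {
                                  conn, err := ln.Accept()
                                  if err != nil {
                                    panic(err)
                                  }
                                  go handle(conn)
                                }
                              }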

                              1. 2

                                Nice write-up!

                                A few notes on the K8s config:

                                For deployments where you want exactly one instance, using a StatefulSet is better than a Deployment. A Deployment creates the new ReplicaSet and pod before shutting down the old one, which could be confusing to users; better to be down completely during the switchover.

                                WEBIRC_PASSWORD could be loading its value from a secret:

                                - name: WEBIRC_PASSWORD
                                  valueFrom:
                                    secretKeyRef:
                                      name: config
                                      key: webirc_password
                                
                              2. 4

                                But, what do you gain from running this on k8s?

                                Experience. The author runs a service on a platform (k8s) that guarantees fast recovery if a node, or the service, fails for some reason. At the same time they serve, hopefully, a large number of users, and therefore they can see how a service behaves under stress, load, etc. And even if the service fails, it is not the most critical thing (unless you sell IRC services).

                                So it is a nice exercise.

                              1. 1

                                I wish the article had at least included the kernel versions affected. Or even better pointed to the kernel commit that fixed the issue. Right now all we know is that something broke performance and how to fix it for Ubuntu 16.04.

                                1. 1

                                  Did you follow the link to the kernel issue? That has the patch and you can find it in your favorite vendor’s kernel change log. https://lkml.org/lkml/2019/5/17/581

                                1. 2

                                  I’m running a couple of projects on one VPS.

                                  Current status:

                                  • running Guix System
                                  • redeploying involves
                                    1. run git pull in the local checkout of any project I want to update
                                    2. run guix system reconfigure path/to/config.scm
                                    3. run sudo herd restart ... for any services that changed
                                  • downsides:
                                    • Guix System itself is unstable
                                    • I’m compiling everything on an underpowered VPS
                                    • even besides compiling, guix tooling is very slow (e.g. a no-op guix system reconfigure)
                                    • building from local checkouts is a bit messy and unprincipled, I might accidentally deploy local modifications
                                    • rollbacks are impossible because guix system switch-generation requires a reboot
                                      • having to restart services manually is a pain and error-prone

                                    Some of these could be addressed with a bit of work, e.g. I believe I could offload the compilation (which would also force me to deal with the local checkouts).

                                  Previous status:

                                  • running Debian stable
                                  • redeploying involved:
                                    • build locally on macos for things I could cross compile (go projects, javascript), then rsync over
                                    • build using CI (travis) for others (haskell projects), then wget
                                    • either tell systemd to reload/restart the configured user-level services, or connect to tmux and switch to relevant window, interrupt, arrow-up, enter
                                  • downsides:
                                    • outdated dependencies, meaning manual installation of some daemons (postgres, postgrest, …)
                                    • similarly, outdated dependencies made it hard to do any one-off development on the server
                                    • I tended to make a mess of deploying javascript things with rsync

                                    I’m a bit happier with the current situation, but… Now that I’ve learned Guix, getting into Nix would probably be more feasible than before. Perhaps Guix on top of NixOS would offer a reasonable migration path; besides slowness, Guix itself is mostly fine, it’s Guix System that I have most issues with.

                                  One thing I don’t know how to solve nicely yet (in any setting, but particularly with Guix/Nix): How to deal with versioning / cache-busting of static web files. It seems that the right thing to do would be to have multiple versions of resources served simultaneously. Perhaps there’s a way to serve up the last couple generations?

                                  1. 2

                                    I’m compiling everything on an underpowered VPS

                                    It’s probably also possible with guix. On NixOS it’s fairly simple to build a system configuration on one machine and then ship the results to the target host over SSH. Assuming they both run the same kernel and arch: nixos-rebuild -I nixos-config=./target-configuration.nix --target-host mytargethostname switch. It’s also possible to provide a --build-host other-machine flag if you need a build machine.

                                    One thing I don’t know how to solve nicely yet (in any setting, but particularly with Guix/Nix): How to deal with versioning / cache-busting of static web files. It seems that the right thing to do would be to have multiple versions of resources served simultaneously. Perhaps there’s a way to serve up the last couple generations?

                                      That would be possible if the HTML pages pointed to /(guix|nix)/store entries for the static assets and that folder were served by the webserver. Then all the CSS, JS and images would still be available until a garbage collection is run on the system.

                                    1. 1

                                        Yes, I believe there are ways to compile remotely with Guix, too. I haven’t tried to run the guix tools on macOS though, and I doubt it would be able to cross-compile, so this would require setting up a VM, which is also a bit of a pain. The way to go there would probably be to build with some CI service, e.g. like nix with cachix.

                                      Somehow serving the whole store sounds like a terrible idea, but thanks for the suggestion!

                                      1. 1

                                        Somehow serving the whole store sounds like a terrible idea, but thanks for the suggestion!

                                        Haha yes, don’t put any credentials in your nix code if you do that!

                                  1. 2

                                    I don’t know if this is much of a risk since the owner of a repo is typically already known by looking at the commit history.

                                    Perhaps if the public key is registered to an identity with more info than is displayed on a GitHub profile, it might leak some. But I can’t imagine a situation where someone signs GitHub commits and repos and doesn’t use a real name on their profile, but uses a real name for their public key.

                                    1. 2

                                      I can imagine it: people have blown their own cover in even dumber ways. ;)

                                      But the point remains, if you fail to consistently keep your secret identity completely separate from your other identities, whose fault is it?

                                      1. 2

                                          The threat model is a bit different.

                                          The attacker downloads all the GitHub account -> public key mappings. With those in hand they can start scanning the Internet and testing those keys against other hosts.

                                          The attack in itself doesn’t grant the attacker access to the target hosts, but it gives them a bit of information that was unexpected. With that in hand they can start deciding which hosts look interesting and then target the attached GitHub accounts.

                                      1. 2

                                        I don’t consider this a real risk in my book.

                                        1. People do a decent job of keeping the private key, private. So while someone might know that you could use a particular key, they don’t have access to it. Not very useful knowledge.
                                        2. Most security advice ensures you do not expose SSH to the public, or have a separate host/FQDN away from the obvious or very public hosts.

                                        This is useful as a bit of enumeration, but doesn’t seem that worrisome.

                                        1. 3

                                          Most security advice ensures you do not expose SSH to the public

                                          What would be the more secure entry point to your system? Some VPN? I consider ssh, only keyfile login allowed, no root login, to be a fairly good and secure entry point to my home network.

                                          1. 1

                                            Agreed, VPN is an encrypted tunnel, just like SSH. It’s not inherently more secure than SSH.

                                          2. 1
                                              1. It’s likely that few people are lucky enough to have a github/gitlab account name that matches their login id, making the username effectively a salt. This could have been a problem if the comment had been retained on the keys served up, though.

                                            Enumeration can be worrying for some, similar to how you can use SSL certs to discover supersets of a group of machines you are trying to tie together and identify ownership of.

                                          1. 4

                                                My first thought was that maybe the extension was breaking Google’s terms of service or violating copyright by bundling Microsoft’s library, but it doesn’t seem so.

                                                I don’t understand Mozilla’s actions here. Why enforce the policy so zealously? The extension is like a single-purpose Tampermonkey, and it only loads official Google/Microsoft libraries via <script>. It doesn’t even dynamically load any code controlled by the extension’s author, so there’s no risk of abuse here.

                                            1. 3

                                              If you keep up with other news about Firefox, this shouldn’t be surprising – they’ve been tightening up the add-ons ecosystem, both AMO and “side-loaded”, for a while. And while it’s tempting to make exceptions for a “good” add-on like this one, personally I think that A) it doesn’t scale as a policy, and B) it’s still too much of a risk given how many add-ons/extensions for various browsers have started out “good” and then ended up in the hands of people who did not-so-good things with the large installed base of trusting users they inherited.

                                              Also, Mozilla is known to be working on in-browser translation features, though with a different approach – they want to do it fully client-side without sending everything you translate through Google/Microsoft.

                                              1. 11

                                                  Side-loading is the escape hatch that puts the user in control. It’s true that Mozilla has to fight toxic extensions and that it’s a big problem for them; that’s the curation you get on addons.mozilla.org. But side-loading is crucial too: it allows for disagreement and caters to minorities. Otherwise Firefox is just yet another walled-garden product like the iPhone and Google Chrome.

                                              2. 1

                                                It’s basically impossible to ‘use judgement’ at scale while remaining cost-competitive with the other browsers.

                                              1. 3

                                                In other words, code takes the shape of how it’s being exercised most.

                                                    With a CD pipeline in place, you make sure that the code handles things like taking configuration the right way, handling multiple hostnames, …

                                                    Without CD, the developer will implement things one way, and it will have to be revisited later so that it deploys in production. Not only does this create more work, it also increases the number of context switches the developer has to go through. Or it delegates the patching work to the ops team, which is also bad.

                                                1. 2

                                                  Lovely write-up. Anyone writing bash code will probably learn a thing or two by reading this post.

                                                      At this point nixpkgs is so tightly coupled to bash that I predict the next performance improvements will come from writing nixpkgs-specific builtin functions.

                                                  1. 1

                                                    At this point nixpkgs is so tightly coupled to bash

                                                        Why is this? I installed NixOS once but don’t know much about the details. It seems to run counter to the goal of correctness.

                                                    1. 14

                                                          It’s because shell is the most convenient language for building Linux distros. A distro glues together software from disparate sources that weren’t intended to work together (GNU, sqlite, Python, node.js, R, etc.). You need a lot of tiny patches to make it work.

                                                      Shell is convenient for downloading, extracting, verifying, and building such tarballs. You just wget, sha256sum, tar, configure, make, patch, etc.

                                                      Both sides use a lot of shell – upstream has configure scripts which detect what OS/CPU you’re on, etc. and customize the C code. Then the “downstream” distro has more shell to automate the steps above, and then put everything in the right file system locations.

                                                      And then preinstall/postinstall hooks are nearly universal across Linux distros (except Nix?), and also written in shell. They litter the file system with various configuration files, etc. Which is another task easily done in shell.


                                                          Several years ago (2010-2014) I was building Debian containers from scratch with shell scripts, and was honestly surprised at how hacky it all is. I get why BSD people make fun of Linux distros and Debian – a lot of it is based on misunderstandings and a legacy of hacks that never went away, not any design or reason.

                                                          There are also half a dozen different languages involved besides shell, particularly make, but also m4, awk, the C preprocessor, sometimes Perl, sometimes Python, etc. It’s what I call “Unix sludge” (which is now being replaced by “devops sludge”, e.g. shell embedded within YAML and Dockerfiles, which also suffers from a lack of design).

                                                      And yes Nix is supposed to be more principled, but they actually use more nasty bash features than almost any Linux distro (and this post adds a few more in the name of performance, e.g. ${x^^} for uppercase). A few years ago I implemented many Oil features based on the needs of Nix, but I think it still hasn’t gotten there:

                                                      https://github.com/oilshell/oil/issues/26

                                                          (Also I think someone here pointed out to me that bash is versioned within Nix, which helps with the goal of correctness. I would say it helps with reproducibility and debugging, which help with correctness. But bash is still a bad language for the foundations of distros, although probably the best one that exists.)

                                                      See Success with Aboriginal, Alpine, and Debian Linux for some more color on this… all distros are based on thousands of lines of shell, and that’s one of the main reasons I’m trying to improve shell.

                                                      One way to think of it is that every programming language has code and data. In C, code is chunks of assembly which you can pass multiple args to and get a return value out of, and data is chunks of (typed or untyped) memory. In shell, code is processes and data is the file system. So shell inherently talks about the things you need to make a Linux distro. If you try it you will see why…

                                                      http://www.linuxfromscratch.org/ is an educational project in that regard.

                                                      1. 1

                                                        It’s because shell is the most convenient language for building Linux distros

                                                        Yeah, I just think it’d be cool, if you’re already making an OS that is so different, to also give up on that paradigm:D

                                                        Not from a cost-benefit POV though, I realize it might be a total overkill given some goals.

                                                        Shell is convenient for downloading, extracting, verifying, and building such tarballs. You just wget, sha256sum, tar, configure, make, patch, etc.

                                                            Sounds like an OS monad to me :D (It could be a collection of type classes which would constrain a context, as in Filesystem m => Networking m => ST m ...; you could introduce more complex security assurances, such as precluding a function from typechecking when the context is too powerful, but the solution would probably be more complicated than this example.)

                                                        Both sides use a lot of shell – upstream has configure scripts which detect what OS/CPU you’re on, etc. and customize the C code. Then the “downstream” distro has more shell to automate the steps above, and then put everything in the right file system locations.

                                                        I’m speaking from inexperience but again this sounds like “that’s the way it’s currently done”.

                                                        And then preinstall/postinstall hooks are nearly universal across Linux distros (except Nix?), and also written in shell. They litter the file system with various configuration files, etc. Which is another task easily done in shell.

                                                        This sounds like a problem Nix was made to solve:D These things do make me anxious too, it’s like I have a tech OCD and various things cause me to wipe the disk and reinstall stuff and never really depend on the underlying OS infrastructure (I realize not many people have that luxury). That’s the main reason Nix sounds attractive.

                                                        In shell, code is processes and data is the file system. So shell inherently talks about the things you need to make a Linux distro. If you try it you will see why…

                                                            Well, Unix traditions certainly lead in that direction. Maybe it’s more ergonomic. However, it doesn’t really seem worth it to collapse these two worlds (code/processes, data/filesystem) for anything larger than a ~100-line script. I never looked up Oil though, so I don’t know which improvements it brings! But I’m guessing that as the system scales, the improvements go in the direction of becoming a ‘real’ programming language (one that is made with a focus on abstraction and general computing, not gluing stuff together). I might be wrong!

                                                        1. 1

                                                          Yeah it’s a nice thought, but I think you are focusing on a small part of the problem while ignoring a bigger one – that open source software is heterogeneous and nobody controls it all.

                                                          But prove me wrong by building a distro with those ideas :)

                                                          A shorter thing I could have said: “shell helps you manage the bazaar”. Look up “the cathedral vs. the bazaar” if you haven’t heard of it. Haskell is more like a cathedral but distros are like the bazaar.

                                                              That said, check out HSH in Haskell and Caml-Shcaml too:

                                                          https://github.com/oilshell/oil/wiki/ExternalResources

                                                          And all the Lisp ones. Maybe they will be a head start on the problem.

                                                          I want a better distro with Oil and I’ve been talking to people about that.

                                                          Also, shell composes in ways that are surprising to most people:

                                                          http://www.oilshell.org/blog/tags.html?tag=shell-the-good-parts#shell-the-good-parts

                                                          Oil’s definitely a higher level language with abstraction. Shell already has good abstraction, but people often don’t know how to use it. Oil lets you do all the same stuff but adds structured data types and so forth.

                                                      2. 1

                                                              Bash is convenient to use as the builder because code snippets are easy to assemble as strings. For example, each build phase is sent as an environment variable and the main stdenv.mkDerivation builder loads them using eval. Bash is also more convenient than Python or Ruby for writing code that runs programs.

                                                        Correctness is preserved mainly by pinning all the dependencies and using a build sandbox to enforce that they are all specified. This makes the results more reproducible.

                                                    1. 1

                                                                Probably a dumb question, but can user-level unionfs mounts be used in a similar way to what Nix does with symbolic links? For example, say:

                                                      pkg1.sha1 depends on lib1.v1.sha2 and lib2.v1.sha3, and pkg2.sha2 on lib1.v2.sha4

                                                                When building pkg1.sha1, union mount pkg1.sha1, lib1.v1.sha2 and lib2.v1.sha3 into a directory dir.sha7, chroot into dir.sha7 so that the libraries are in the correct place, and invoke the build commands. Similarly, use the same procedure when running the commands from pkg1.sha1.

                                                      Similarly for pkg2.sha2, union mount pkg2.sha2, and lib1.v2.sha4.

                                                      1. 3
                                                        1. 1

                                                          Thanks for the link. I will check that out.

                                                        2. 2

                                                          Hah I’m building a package builder for a Linux distro experiment and plan to do exactly this.

                                                          1. 1

                                                            Any chance for a link? I am extremely interested :).

                                                            1. 2

                                                                      There’s nothing public yet. It’s a spare-time project that has been bumped temporarily for another side project. I jot down thoughts at https://uld.wezm.net/ from time to time.

                                                              1. 1

                                                                Thanks, much appreciated.

                                                          2. 1

                                                            It’s not a dumb question at all.

                                                                    One of the downsides is that creating new user namespaces and then mounting all the dependencies is relatively expensive compared to resolving symlinks. It also restricts the package manager to Linux, since other kernels don’t tend to have that feature available (except Plan9).

                                                                    The Nix approach is a bit simpler to implement, and is also more composable because it works with both programs and configuration files.

                                                          1. 3

                                                                      Rewrites are OK, but they should not be larger than what the current team can handle. This is quite fundamental, I think. Each team can handle a different project size depending on their skills and how well they work with each other. It’s important to know your limits.

                                                            Instead of doing a full rewrite, try to cut down the existing product into manageable chunks, ideally defined by interfaces. And then replace them one by one. It might seem like it takes longer, but each chunk is something that you can deliver and is not lost.